11:44:55 Started by timer
11:44:55 Running as SYSTEM
11:44:55 [EnvInject] - Loading node environment variables.
11:44:55 Building remotely on prd-ubuntu1804-docker-8c-8g-21584 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
11:44:55 [ssh-agent] Looking for ssh-agent implementation...
11:44:55 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
11:44:55 $ ssh-agent
11:44:55 SSH_AUTH_SOCK=/tmp/ssh-5pN1hVrKcM5n/agent.2073
11:44:55 SSH_AGENT_PID=2075
11:44:55 [ssh-agent] Started.
11:44:55 Running ssh-add (command line suppressed)
11:44:55 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_6327988617652414941.key (/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_6327988617652414941.key)
11:44:55 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
11:44:55 The recommended git tool is: NONE
11:44:57 using credential onap-jenkins-ssh
11:44:57 Wiping out workspace first.
11:44:57 Cloning the remote Git repository
11:44:57 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
11:44:57 > git init /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp # timeout=10
11:44:57 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
11:44:57 > git --version # timeout=10
11:44:57 > git --version # 'git version 2.17.1'
11:44:57 using GIT_SSH to set credentials Gerrit user
11:44:57 Verifying host key using manually-configured host key entries
11:44:57 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
11:44:57 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
11:44:57 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:44:58 Avoid second fetch
11:44:58 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
11:44:58 Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/remotes/origin/master)
11:44:58 > git config core.sparsecheckout # timeout=10
11:44:58 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
11:44:58 Commit message: "Add Fix fail handling in ACM runtime in CSIT"
11:44:58 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
11:45:01 provisioning config files...
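For reference, a minimal shell sketch reproducing the checkout Jenkins performed above (the mirror URL and commit hash are taken from the log; the directory name is illustrative):

# clone the ONAP policy/docker mirror and pin the revision under test
git init docker && cd docker
git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803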
11:45:01 copy managed file [npmrc] to file:/home/jenkins/.npmrc
11:45:01 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
11:45:01 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins1431226351649644145.sh
11:45:01 ---> python-tools-install.sh
11:45:01 Setup pyenv:
11:45:01 * system (set by /opt/pyenv/version)
11:45:01 * 3.8.13 (set by /opt/pyenv/version)
11:45:01 * 3.9.13 (set by /opt/pyenv/version)
11:45:01 * 3.10.6 (set by /opt/pyenv/version)
11:45:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-QaKm
11:45:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
11:45:10 lf-activate-venv(): INFO: Installing: lftools
11:45:36 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
11:45:36 Generating Requirements File
11:45:56 Python 3.10.6
11:45:56 pip 25.1.1 from /tmp/venv-QaKm/lib/python3.10/site-packages/pip (python 3.10)
11:45:57 appdirs==1.4.4
11:45:57 argcomplete==3.6.2
11:45:57 aspy.yaml==1.3.0
11:45:57 attrs==25.3.0
11:45:57 autopage==0.5.2
11:45:57 beautifulsoup4==4.13.4
11:45:57 boto3==1.38.36
11:45:57 botocore==1.38.36
11:45:57 bs4==0.0.2
11:45:57 cachetools==5.5.2
11:45:57 certifi==2025.6.15
11:45:57 cffi==1.17.1
11:45:57 cfgv==3.4.0
11:45:57 chardet==5.2.0
11:45:57 charset-normalizer==3.4.2
11:45:57 click==8.2.1
11:45:57 cliff==4.10.0
11:45:57 cmd2==2.6.1
11:45:57 cryptography==3.3.2
11:45:57 debtcollector==3.0.0
11:45:57 decorator==5.2.1
11:45:57 defusedxml==0.7.1
11:45:57 Deprecated==1.2.18
11:45:57 distlib==0.3.9
11:45:57 dnspython==2.7.0
11:45:57 docker==7.1.0
11:45:57 dogpile.cache==1.4.0
11:45:57 durationpy==0.10
11:45:57 email_validator==2.2.0
11:45:57 filelock==3.18.0
11:45:57 future==1.0.0
11:45:57 gitdb==4.0.12
11:45:57 GitPython==3.1.44
11:45:57 google-auth==2.40.3
11:45:57 httplib2==0.22.0
11:45:57 identify==2.6.12
11:45:57 idna==3.10
11:45:57 importlib-resources==1.5.0
11:45:57 iso8601==2.1.0
11:45:57 Jinja2==3.1.6
11:45:57 jmespath==1.0.1
11:45:57 jsonpatch==1.33
11:45:57 jsonpointer==3.0.0
11:45:57 jsonschema==4.24.0
11:45:57 jsonschema-specifications==2025.4.1
11:45:57 keystoneauth1==5.11.1
11:45:57 kubernetes==33.1.0
11:45:57 lftools==0.37.13
11:45:57 lxml==5.4.0
11:45:57 MarkupSafe==3.0.2
11:45:57 msgpack==1.1.1
11:45:57 multi_key_dict==2.0.3
11:45:57 munch==4.0.0
11:45:57 netaddr==1.3.0
11:45:57 niet==1.4.2
11:45:57 nodeenv==1.9.1
11:45:57 oauth2client==4.1.3
11:45:57 oauthlib==3.2.2
11:45:57 openstacksdk==4.6.0
11:45:57 os-client-config==2.1.0
11:45:57 os-service-types==1.7.0
11:45:57 osc-lib==4.0.2
11:45:57 oslo.config==9.8.0
11:45:57 oslo.context==6.0.0
11:45:57 oslo.i18n==6.5.1
11:45:57 oslo.log==7.1.0
11:45:57 oslo.serialization==5.7.0
11:45:57 oslo.utils==9.0.0
11:45:57 packaging==25.0
11:45:57 pbr==6.1.1
11:45:57 platformdirs==4.3.8
11:45:57 prettytable==3.16.0
11:45:57 psutil==7.0.0
11:45:57 pyasn1==0.6.1
11:45:57 pyasn1_modules==0.4.2
11:45:57 pycparser==2.22
11:45:57 pygerrit2==2.0.15
11:45:57 PyGithub==2.6.1
11:45:57 PyJWT==2.10.1
11:45:57 PyNaCl==1.5.0
11:45:57 pyparsing==2.4.7
11:45:57 pyperclip==1.9.0
11:45:57 pyrsistent==0.20.0
11:45:57 python-cinderclient==9.7.0
11:45:57 python-dateutil==2.9.0.post0
11:45:57 python-heatclient==4.2.0
11:45:57 python-jenkins==1.8.2
11:45:57 python-keystoneclient==5.6.0
11:45:57 python-magnumclient==4.8.1
11:45:57 python-openstackclient==8.1.0
11:45:57 python-swiftclient==4.8.0
11:45:57 PyYAML==6.0.2
11:45:57 referencing==0.36.2
11:45:57 requests==2.32.4
11:45:57 requests-oauthlib==2.0.0
11:45:57 requestsexceptions==1.4.0
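A minimal sketch of what the lf-activate-venv() step above amounts to (paths taken from the log; the real helper comes from the Linux Foundation's global-jjb shell library and does additional bookkeeping):

# create the build venv and put lftools on PATH
python3 -m venv /tmp/venv-QaKm
/tmp/venv-QaKm/bin/pip install --upgrade pip
/tmp/venv-QaKm/bin/pip install lftools
export PATH="/tmp/venv-QaKm/bin:$PATH"
pip freeze   # produces the "Generating Requirements File" dump that follows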
11:45:57 rfc3986==2.0.0
11:45:57 rpds-py==0.25.1
11:45:57 rsa==4.9.1
11:45:57 ruamel.yaml==0.18.14
11:45:57 ruamel.yaml.clib==0.2.12
11:45:57 s3transfer==0.13.0
11:45:57 simplejson==3.20.1
11:45:57 six==1.17.0
11:45:57 smmap==5.0.2
11:45:57 soupsieve==2.7
11:45:57 stevedore==5.4.1
11:45:57 tabulate==0.9.0
11:45:57 toml==0.10.2
11:45:57 tomlkit==0.13.3
11:45:57 tqdm==4.67.1
11:45:57 typing_extensions==4.14.0
11:45:57 tzdata==2025.2
11:45:57 urllib3==1.26.20
11:45:57 virtualenv==20.31.2
11:45:57 wcwidth==0.2.13
11:45:57 websocket-client==1.8.0
11:45:57 wrapt==1.17.2
11:45:57 xdg==6.0.0
11:45:57 xmltodict==0.14.2
11:45:57 yq==3.4.3
11:45:57 [EnvInject] - Injecting environment variables from a build step.
11:45:57 [EnvInject] - Injecting as environment variables the properties content
11:45:57 SET_JDK_VERSION=openjdk17
11:45:57 GIT_URL="git://cloud.onap.org/mirror"
11:45:57
11:45:57 [EnvInject] - Variables injected successfully.
11:45:57 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh /tmp/jenkins6357084058631707015.sh
11:45:57 ---> update-java-alternatives.sh
11:45:57 ---> Updating Java version
11:45:57 ---> Ubuntu/Debian system detected
11:45:57 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
11:45:57 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
11:45:57 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
11:45:57 openjdk version "17.0.4" 2022-07-19
11:45:57 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
11:45:57 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
11:45:57 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
11:45:57 [EnvInject] - Injecting environment variables from a build step.
11:45:57 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
11:45:57 [EnvInject] - Variables injected successfully.
11:45:57 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh -xe /tmp/jenkins11227102259556747411.sh
11:45:57 + /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/run-project-csit.sh policy-opa-pdp
11:45:58 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
11:45:58 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
11:45:58 Configure a credential helper to remove this warning. See
11:45:58 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
11:45:58
11:45:58 Login Succeeded
11:45:58 docker: 'compose' is not a docker command.
11:45:58 See 'docker --help'
11:45:58 Docker Compose Plugin not installed. Installing now...
11:45:58   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
11:45:58                                  Dload  Upload   Total   Spent    Left  Speed
11:45:59 100 60.2M  100 60.2M    0     0  72.8M      0 --:--:-- --:--:-- --:--:--  106M
11:45:59 Setting project configuration for: policy-opa-pdp
11:45:59 Configuring docker compose...
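A hedged sketch of the two fixes suggested by the output above: the documented remedy for the --password warning, and the fallback plugin install the script performs (the release URL and plugin path follow Docker's documented CLI-plugin layout; the pinned Compose version and the DOCKER_USERNAME/DOCKER_PASSWORD variable names are assumptions, not shown in the log):

# avoid the CLI-password warning by piping the secret on stdin
echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" --password-stdin
# install the Docker Compose v2 CLI plugin (the ~60.2MB binary downloaded above)
mkdir -p ~/.docker/cli-plugins
curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version   # now resolves instead of "'compose' is not a docker command"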
11:46:01 Starting opa-pdp using postgres + Grafana/Prometheus
11:46:01 prometheus Pulling
11:46:01 postgres Pulling
11:46:01 kafka Pulling
11:46:01 opa-pdp Pulling
11:46:01 zookeeper Pulling
11:46:01 pap Pulling
11:46:01 api Pulling
11:46:01 grafana Pulling
11:46:01 policy-db-migrator Pulling
11:46:01 (per-layer "Pulling fs layer" / Waiting / Downloading / Verifying Checksum / Extracting / "Pull complete" progress output for the images' shared layers, 11:46:01 through 11:46:08, elided)
11:46:05 api Pulled
11:46:06 pap Pulled
11:46:08 policy-db-migrator Pulled
11:46:08 90dd78f85976 Downloading [=========> ]
8.093MB/41.49MB 11:46:08 531ee2cf3c0c Extracting [==============================================> ] 7.569MB/8.066MB 11:46:08 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 11:46:08 55f2b468da67 Extracting [====> ] 22.84MB/257.9MB 11:46:08 eabd8714fec9 Extracting [====================> ] 157.1MB/375MB 11:46:08 65d25c0f02f3 Downloading [================================================> ] 28.02MB/28.98MB 11:46:08 f3b09c502777 Extracting [==================> ] 20.61MB/56.52MB 11:46:08 c49e0ee60bfb Extracting [================================> ] 69.63MB/107.3MB 11:46:08 65d25c0f02f3 Verifying Checksum 11:46:08 65d25c0f02f3 Download complete 11:46:08 90dd78f85976 Downloading [======================> ] 18.74MB/41.49MB 11:46:08 f3b09c502777 Extracting [====================> ] 22.84MB/56.52MB 11:46:08 eabd8714fec9 Extracting [=====================> ] 158.8MB/375MB 11:46:08 c49e0ee60bfb Extracting [=================================> ] 71.3MB/107.3MB 11:46:08 90dd78f85976 Downloading [============================> ] 23.85MB/41.49MB 11:46:08 f90c8eb4724c Extracting [==========================================> ] 26.21MB/30.59MB 11:46:08 f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB 11:46:08 c49e0ee60bfb Extracting [==================================> ] 73.53MB/107.3MB 11:46:08 90dd78f85976 Downloading [=========================================> ] 34.5MB/41.49MB 11:46:08 eabd8714fec9 Extracting [=====================> ] 161MB/375MB 11:46:08 531ee2cf3c0c Pull complete 11:46:08 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 11:46:08 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 11:46:08 f90c8eb4724c Extracting [============================================> ] 27.2MB/30.59MB 11:46:08 f3b09c502777 Extracting [========================> ] 27.85MB/56.52MB 11:46:08 90dd78f85976 Verifying Checksum 11:46:08 90dd78f85976 Download complete 11:46:08 c49e0ee60bfb Extracting [===================================> ] 75.76MB/107.3MB 11:46:08 eabd8714fec9 Extracting [=====================> ] 163.2MB/375MB 11:46:08 55f2b468da67 Extracting [======> ] 31.75MB/257.9MB 11:46:09 f3b09c502777 Extracting [=================================> ] 37.32MB/56.52MB 11:46:09 f90c8eb4724c Extracting [===============================================> ] 29.16MB/30.59MB 11:46:09 c49e0ee60bfb Extracting [====================================> ] 78.54MB/107.3MB 11:46:09 ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 11:46:09 eabd8714fec9 Extracting [======================> ] 165.4MB/375MB 11:46:09 55f2b468da67 Extracting [=======> ] 37.32MB/257.9MB 11:46:09 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 11:46:09 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 11:46:09 f3b09c502777 Extracting [======================================> ] 43.45MB/56.52MB 11:46:09 f90c8eb4724c Extracting [=================================================> ] 30.15MB/30.59MB 11:46:09 c49e0ee60bfb Extracting [=====================================> ] 81.33MB/107.3MB 11:46:09 eabd8714fec9 Extracting [======================> ] 168.8MB/375MB 11:46:09 55f2b468da67 Extracting [========> ] 45.12MB/257.9MB 11:46:09 eabd8714fec9 Extracting [======================> ] 171MB/375MB 11:46:09 f3b09c502777 Extracting [============================================> ] 50.69MB/56.52MB 11:46:09 c49e0ee60bfb Extracting [======================================> ] 83MB/107.3MB 11:46:09 f90c8eb4724c Extracting 
[==================================================>] 30.59MB/30.59MB 11:46:09 55f2b468da67 Extracting [=========> ] 47.35MB/257.9MB 11:46:09 eabd8714fec9 Extracting [========================> ] 182.7MB/375MB 11:46:09 c49e0ee60bfb Extracting [========================================> ] 86.34MB/107.3MB 11:46:09 f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB 11:46:09 55f2b468da67 Extracting [==========> ] 55.71MB/257.9MB 11:46:09 eabd8714fec9 Extracting [=========================> ] 190.5MB/375MB 11:46:09 c49e0ee60bfb Extracting [============================================> ] 94.7MB/107.3MB 11:46:09 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 11:46:09 55f2b468da67 Extracting [============> ] 64.06MB/257.9MB 11:46:09 eabd8714fec9 Extracting [==========================> ] 200MB/375MB 11:46:09 55f2b468da67 Extracting [==============> ] 75.76MB/257.9MB 11:46:09 c49e0ee60bfb Extracting [===============================================> ] 100.8MB/107.3MB 11:46:09 eabd8714fec9 Extracting [============================> ] 211.7MB/375MB 11:46:09 55f2b468da67 Extracting [================> ] 87.46MB/257.9MB 11:46:09 c49e0ee60bfb Extracting [================================================> ] 103.6MB/107.3MB 11:46:09 eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 11:46:09 55f2b468da67 Extracting [===================> ] 98.04MB/257.9MB 11:46:09 c49e0ee60bfb Extracting [================================================> ] 104.7MB/107.3MB 11:46:09 55f2b468da67 Extracting [====================> ] 105.8MB/257.9MB 11:46:09 eabd8714fec9 Extracting [=============================> ] 221.7MB/375MB 11:46:09 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 11:46:10 55f2b468da67 Extracting [====================> ] 108.1MB/257.9MB 11:46:10 eabd8714fec9 Extracting [=============================> ] 222.8MB/375MB 11:46:10 55f2b468da67 Extracting [======================> ] 114.2MB/257.9MB 11:46:10 eabd8714fec9 Extracting [==============================> ] 226.2MB/375MB 11:46:10 55f2b468da67 Extracting [=======================> ] 120.3MB/257.9MB 11:46:10 eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB 11:46:10 55f2b468da67 Extracting [========================> ] 127MB/257.9MB 11:46:10 eabd8714fec9 Extracting [===============================> ] 236.7MB/375MB 11:46:10 eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 11:46:10 55f2b468da67 Extracting [========================> ] 128.1MB/257.9MB 11:46:10 eabd8714fec9 Extracting [================================> ] 244MB/375MB 11:46:10 55f2b468da67 Extracting [=========================> ] 133.1MB/257.9MB 11:46:10 55f2b468da67 Extracting [==========================> ] 139.3MB/257.9MB 11:46:10 eabd8714fec9 Extracting [=================================> ] 249.6MB/375MB 11:46:10 eabd8714fec9 Extracting [==================================> ] 255.1MB/375MB 11:46:10 55f2b468da67 Extracting [===========================> ] 144.3MB/257.9MB 11:46:10 55f2b468da67 Extracting [============================> ] 149.3MB/257.9MB 11:46:10 eabd8714fec9 Extracting [==================================> ] 261.3MB/375MB 11:46:11 ed54a7dee1d8 Pull complete 11:46:11 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 11:46:11 eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 11:46:11 f90c8eb4724c Pull complete 11:46:11 f3b09c502777 Pull complete 
11:46:11 eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB 11:46:11 55f2b468da67 Extracting [=============================> ] 151.5MB/257.9MB 11:46:11 eabd8714fec9 Extracting [===================================> ] 265.2MB/375MB 11:46:11 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB 11:46:11 55f2b468da67 Extracting [==============================> ] 159.9MB/257.9MB 11:46:11 eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 11:46:11 55f2b468da67 Extracting [================================> ] 165.4MB/257.9MB 11:46:11 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 11:46:11 eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 11:46:11 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 11:46:12 12c5c803443f Extracting [==================================================>] 116B/116B 11:46:12 12c5c803443f Extracting [==================================================>] 116B/116B 11:46:12 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 11:46:12 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 11:46:12 c49e0ee60bfb Pull complete 11:46:12 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 11:46:12 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 11:46:12 408012a7b118 Extracting [==================================================>] 637B/637B 11:46:12 408012a7b118 Extracting [==================================================>] 637B/637B 11:46:12 eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 11:46:12 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 11:46:12 eabd8714fec9 Extracting [=====================================> ] 279.6MB/375MB 11:46:12 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB 11:46:12 eabd8714fec9 Extracting [=====================================> ] 284.7MB/375MB 11:46:12 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB 11:46:13 eabd8714fec9 Extracting [======================================> ] 289.7MB/375MB 11:46:13 55f2b468da67 Extracting [====================================> ] 186.1MB/257.9MB 11:46:13 eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB 11:46:13 55f2b468da67 Extracting [====================================> ] 190MB/257.9MB 11:46:13 eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB 11:46:13 55f2b468da67 Extracting [=====================================> ] 192.2MB/257.9MB 11:46:13 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 11:46:13 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 11:46:13 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 11:46:13 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 11:46:13 eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 11:46:13 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 11:46:13 2b1b549e99de Extracting [> ] 32.77kB/2.646MB 11:46:13 eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 11:46:13 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 11:46:13 2b1b549e99de Extracting [======> ] 327.7kB/2.646MB 
11:46:14 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB 11:46:14 eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 11:46:14 384497dbce3b Extracting [> ] 557.1kB/63.48MB 11:46:14 2b1b549e99de Extracting [==================================================>] 2.646MB/2.646MB 11:46:14 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 11:46:14 12c5c803443f Pull complete 11:46:14 408012a7b118 Pull complete 11:46:14 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB 11:46:14 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 11:46:14 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 11:46:14 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 11:46:14 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 11:46:14 2b1b549e99de Pull complete 11:46:14 eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 11:46:14 384497dbce3b Extracting [> ] 1.114MB/63.48MB 11:46:14 547372ea8ffa Extracting [> ] 131.1kB/12.63MB 11:46:14 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB 11:46:14 e27c75a98748 Pull complete 11:46:14 eabd8714fec9 Extracting [=========================================> ] 307.5MB/375MB 11:46:15 547372ea8ffa Extracting [=> ] 262.1kB/12.63MB 11:46:15 44986281b8b9 Pull complete 11:46:15 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 11:46:15 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 11:46:15 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB 11:46:15 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 11:46:15 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 11:46:15 547372ea8ffa Extracting [===============> ] 3.801MB/12.63MB 11:46:15 eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 11:46:15 547372ea8ffa Extracting [==============================> ] 7.602MB/12.63MB 11:46:15 e73cb4a42719 Extracting [=> ] 2.228MB/109.1MB 11:46:15 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 11:46:15 547372ea8ffa Extracting [==================================================>] 12.63MB/12.63MB 11:46:15 eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 11:46:15 e73cb4a42719 Extracting [==> ] 5.014MB/109.1MB 11:46:15 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 11:46:15 e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB 11:46:15 eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 11:46:15 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 11:46:15 e73cb4a42719 Extracting [===> ] 8.356MB/109.1MB 11:46:15 bf70c5107ab5 Pull complete 11:46:15 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 11:46:15 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 11:46:15 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 11:46:16 e73cb4a42719 Extracting [====> ] 10.03MB/109.1MB 11:46:16 547372ea8ffa Pull complete 11:46:16 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 11:46:16 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 11:46:16 384497dbce3b Extracting 
[===> ] 3.899MB/63.48MB 11:46:16 e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB 11:46:16 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB 11:46:16 eabd8714fec9 Extracting [==========================================> ] 315.3MB/375MB 11:46:16 65d25c0f02f3 Extracting [> ] 294.9kB/28.98MB 11:46:16 e73cb4a42719 Extracting [======> ] 14.48MB/109.1MB 11:46:16 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 11:46:16 65d25c0f02f3 Extracting [=======> ] 4.424MB/28.98MB 11:46:16 eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB 11:46:16 e73cb4a42719 Extracting [=======> ] 17.27MB/109.1MB 11:46:16 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 11:46:16 65d25c0f02f3 Extracting [============> ] 7.373MB/28.98MB 11:46:16 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 11:46:16 e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 11:46:16 1ccde423731d Pull complete 11:46:16 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 11:46:16 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB 11:46:16 65d25c0f02f3 Extracting [==================> ] 10.62MB/28.98MB 11:46:16 384497dbce3b Extracting [=====> ] 7.242MB/63.48MB 11:46:16 e73cb4a42719 Extracting [=========> ] 20.61MB/109.1MB 11:46:16 65d25c0f02f3 Extracting [=========================> ] 14.75MB/28.98MB 11:46:16 65d25c0f02f3 Extracting [======================================> ] 22.41MB/28.98MB 11:46:16 e73cb4a42719 Extracting [=========> ] 21.73MB/109.1MB 11:46:16 55f2b468da67 Extracting [=========================================> ] 216.1MB/257.9MB 11:46:16 eabd8714fec9 Extracting [==========================================> ] 322MB/375MB 11:46:16 65d25c0f02f3 Extracting [==================================================>] 28.98MB/28.98MB 11:46:16 7221d93db8a9 Extracting [==================================================>] 100B/100B 11:46:16 7221d93db8a9 Extracting [==================================================>] 100B/100B 11:46:16 e73cb4a42719 Extracting [===========> ] 24.51MB/109.1MB 11:46:16 eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 11:46:16 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 11:46:16 55f2b468da67 Extracting [==========================================> ] 218.4MB/257.9MB 11:46:17 65d25c0f02f3 Pull complete 11:46:17 eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB 11:46:17 e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 11:46:17 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB 11:46:17 e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 11:46:17 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 11:46:17 eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 11:46:17 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 11:46:17 90dd78f85976 Extracting [> ] 426kB/41.49MB 11:46:17 e73cb4a42719 Extracting [===============> ] 32.87MB/109.1MB 11:46:17 90dd78f85976 Extracting [======> ] 5.112MB/41.49MB 11:46:17 e73cb4a42719 Extracting [==================> ] 40.67MB/109.1MB 11:46:17 90dd78f85976 Extracting [============> ] 10.65MB/41.49MB 11:46:17 e73cb4a42719 Extracting [======================> ] 48.46MB/109.1MB 11:46:17 90dd78f85976 Extracting [=================> ] 14.91MB/41.49MB 11:46:17 55f2b468da67 Extracting [===========================================> ] 
222.3MB/257.9MB 11:46:17 384497dbce3b Extracting [=======> ] 10.03MB/63.48MB 11:46:17 7221d93db8a9 Pull complete 11:46:17 7df673c7455d Extracting [==================================================>] 694B/694B 11:46:17 eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 11:46:17 7df673c7455d Extracting [==================================================>] 694B/694B 11:46:17 e73cb4a42719 Extracting [=======================> ] 51.81MB/109.1MB 11:46:17 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB 11:46:17 90dd78f85976 Extracting [====================> ] 17.04MB/41.49MB 11:46:17 384497dbce3b Extracting [========> ] 11.14MB/63.48MB 11:46:17 e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB 11:46:17 90dd78f85976 Extracting [========================> ] 20.45MB/41.49MB 11:46:17 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB 11:46:17 eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 11:46:17 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 11:46:17 384497dbce3b Extracting [==========> ] 13.37MB/63.48MB 11:46:17 90dd78f85976 Extracting [===============================> ] 25.99MB/41.49MB 11:46:17 eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 11:46:17 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB 11:46:17 e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 11:46:17 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 11:46:18 90dd78f85976 Extracting [=======================================> ] 32.8MB/41.49MB 11:46:18 e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 11:46:18 90dd78f85976 Extracting [=============================================> ] 37.49MB/41.49MB 11:46:18 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 11:46:18 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 11:46:18 7df673c7455d Pull complete 11:46:18 e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 11:46:18 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 11:46:18 90dd78f85976 Extracting [================================================> ] 40.47MB/41.49MB 11:46:18 eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 11:46:18 90dd78f85976 Extracting [==================================================>] 41.49MB/41.49MB 11:46:18 e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 11:46:18 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 11:46:18 384497dbce3b Extracting [==============> ] 17.83MB/63.48MB 11:46:18 eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB 11:46:18 e73cb4a42719 Extracting [===============================> ] 67.96MB/109.1MB 11:46:18 384497dbce3b Extracting [===============> ] 20.05MB/63.48MB 11:46:18 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB 11:46:18 eabd8714fec9 Extracting [============================================> ] 336.5MB/375MB 11:46:18 e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB 11:46:18 384497dbce3b Extracting [==================> ] 23.4MB/63.48MB 11:46:18 e73cb4a42719 Extracting [====================================> ] 78.54MB/109.1MB 11:46:18 384497dbce3b Extracting 
[=====================> ] 27.3MB/63.48MB 11:46:18 e73cb4a42719 Extracting [======================================> ] 83.56MB/109.1MB 11:46:18 eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB 11:46:19 e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB 11:46:19 384497dbce3b Extracting [========================> ] 30.64MB/63.48MB 11:46:19 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 11:46:19 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 11:46:19 90dd78f85976 Pull complete 11:46:19 e73cb4a42719 Extracting [=======================================> ] 86.9MB/109.1MB 11:46:19 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 11:46:19 prometheus Pulled 11:46:19 e73cb4a42719 Extracting [========================================> ] 88.01MB/109.1MB 11:46:19 384497dbce3b Extracting [=========================> ] 32.31MB/63.48MB 11:46:19 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 11:46:19 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 11:46:19 e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB 11:46:19 384497dbce3b Extracting [===========================> ] 34.54MB/63.48MB 11:46:19 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 11:46:19 e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 11:46:19 384497dbce3b Extracting [============================> ] 36.77MB/63.48MB 11:46:19 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 11:46:19 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB 11:46:19 e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB 11:46:19 384497dbce3b Extracting [===============================> ] 39.55MB/63.48MB 11:46:19 55f2b468da67 Extracting [===============================================> ] 244MB/257.9MB 11:46:19 e73cb4a42719 Extracting [=============================================> ] 98.6MB/109.1MB 11:46:19 384497dbce3b Extracting [=================================> ] 42.34MB/63.48MB 11:46:19 384497dbce3b Extracting [====================================> ] 46.24MB/63.48MB 11:46:20 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB 11:46:20 e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 11:46:20 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 11:46:20 4f4fb700ef54 Extracting [==================================================>] 32B/32B 11:46:20 4f4fb700ef54 Extracting [==================================================>] 32B/32B 11:46:20 384497dbce3b Extracting [========================================> ] 51.25MB/63.48MB 11:46:20 e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 11:46:20 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 11:46:20 384497dbce3b Extracting [==========================================> ] 54.03MB/63.48MB 11:46:20 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 11:46:20 e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB 11:46:20 384497dbce3b Extracting [==========================================> ] 
54.59MB/63.48MB 11:46:20 55f2b468da67 Extracting [================================================> ] 248.4MB/257.9MB 11:46:20 eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB 11:46:20 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB 11:46:20 384497dbce3b Extracting [==============================================> ] 58.49MB/63.48MB 11:46:20 eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 11:46:20 e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 11:46:20 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 11:46:21 e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 11:46:21 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 11:46:21 55f2b468da67 Extracting [=================================================> ] 256.8MB/257.9MB 11:46:21 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 11:46:21 e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 11:46:21 4f4fb700ef54 Pull complete 11:46:21 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 11:46:21 e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 11:46:21 eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB 11:46:21 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 11:46:21 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB 11:46:21 eabd8714fec9 Extracting [==============================================> ] 348.7MB/375MB 11:46:22 eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB 11:46:22 eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 11:46:22 384497dbce3b Extracting [=================================================> ] 62.95MB/63.48MB 11:46:22 eabd8714fec9 Extracting [==============================================> ] 352.1MB/375MB 11:46:22 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 11:46:22 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 11:46:22 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 11:46:22 eabd8714fec9 Extracting [===============================================> ] 354.3MB/375MB 11:46:22 e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB 11:46:22 55f2b468da67 Pull complete 11:46:22 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 11:46:23 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 11:46:23 eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB 11:46:23 eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB 11:46:23 eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 11:46:23 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 11:46:24 opa-pdp Pulled 11:46:24 82bfc142787e Extracting [> ] 98.3kB/8.613MB 11:46:24 82bfc142787e Extracting [=================================> 
] 5.702MB/8.613MB 11:46:24 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 11:46:25 384497dbce3b Pull complete 11:46:26 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 11:46:26 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 11:46:26 e73cb4a42719 Pull complete 11:46:26 eabd8714fec9 Pull complete 11:46:26 82bfc142787e Pull complete 11:46:26 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 11:46:26 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 11:46:26 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 11:46:26 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 11:46:26 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 11:46:26 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 11:46:26 45fd2fec8a19 Pull complete 11:46:26 055b9255fa03 Pull complete 11:46:26 a83b68436f09 Pull complete 11:46:26 46baca71a4ef Pull complete 11:46:26 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 11:46:26 787d6bee9571 Extracting [==================================================>] 127B/127B 11:46:26 787d6bee9571 Extracting [==================================================>] 127B/127B 11:46:26 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 11:46:26 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 11:46:26 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 11:46:26 8f10199ed94b Extracting [============> ] 2.163MB/8.768MB 11:46:26 b0e0ef7895f4 Extracting [===========> ] 8.651MB/37.01MB 11:46:26 787d6bee9571 Pull complete 11:46:26 b176d7edde70 Pull complete 11:46:26 13ff0988aaea Extracting [==================================================>] 167B/167B 11:46:26 13ff0988aaea Extracting [==================================================>] 167B/167B 11:46:26 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 11:46:26 grafana Pulled 11:46:26 8f10199ed94b Pull complete 11:46:26 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 11:46:26 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 11:46:26 b0e0ef7895f4 Extracting [===========================> ] 20.45MB/37.01MB 11:46:26 13ff0988aaea Pull complete 11:46:26 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 11:46:26 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 11:46:26 f963a77d2726 Pull complete 11:46:26 b0e0ef7895f4 Extracting [=================================================> ] 36.96MB/37.01MB 11:46:26 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 11:46:26 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 11:46:26 4b82842ab819 Pull complete 11:46:26 b0e0ef7895f4 Pull complete 11:46:26 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 11:46:26 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 11:46:26 7e568a0dc8fb Extracting [==================================================>] 184B/184B 11:46:26 7e568a0dc8fb Extracting 
[==================================================>] 184B/184B 11:46:26 f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB 11:46:26 7e568a0dc8fb Pull complete 11:46:26 c0c90eeb8aca Pull complete 11:46:26 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 11:46:26 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 11:46:26 postgres Pulled 11:46:26 f3a82e9f1761 Extracting [================================> ] 28.9MB/44.41MB 11:46:26 5cfb27c10ea5 Pull complete 11:46:26 40a5eed61bb0 Extracting [==================================================>] 98B/98B 11:46:26 40a5eed61bb0 Extracting [==================================================>] 98B/98B 11:46:26 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 11:46:26 f3a82e9f1761 Pull complete 11:46:26 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 11:46:26 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 11:46:26 40a5eed61bb0 Pull complete 11:46:26 e040ea11fa10 Extracting [==================================================>] 173B/173B 11:46:26 e040ea11fa10 Extracting [==================================================>] 173B/173B 11:46:27 79161a3f5362 Pull complete 11:46:27 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 11:46:27 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 11:46:27 e040ea11fa10 Pull complete 11:46:27 9c266ba63f51 Pull complete 11:46:27 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 11:46:27 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 11:46:27 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 11:46:27 2e8a7df9c2ee Pull complete 11:46:27 10f05dd8b1db Extracting [==================================================>] 98B/98B 11:46:27 10f05dd8b1db Extracting [==================================================>] 98B/98B 11:46:27 09d5a3f70313 Extracting [======> ] 13.93MB/109.2MB 11:46:27 09d5a3f70313 Extracting [==============> ] 30.64MB/109.2MB 11:46:27 10f05dd8b1db Pull complete 11:46:27 41dac8b43ba6 Extracting [==================================================>] 171B/171B 11:46:27 41dac8b43ba6 Extracting [==================================================>] 171B/171B 11:46:27 09d5a3f70313 Extracting [=======================> ] 50.69MB/109.2MB 11:46:27 41dac8b43ba6 Pull complete 11:46:27 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 11:46:27 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 11:46:27 09d5a3f70313 Extracting [===============================> ] 67.96MB/109.2MB 11:46:27 71a9f6a9ab4d Pull complete 11:46:27 09d5a3f70313 Extracting [======================================> ] 83MB/109.2MB 11:46:27 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 11:46:27 09d5a3f70313 Extracting [=============================================> ] 98.6MB/109.2MB 11:46:27 da3ed5db7103 Extracting [=====> ] 13.93MB/127.4MB 11:46:27 09d5a3f70313 Extracting [================================================> ] 106.4MB/109.2MB 11:46:28 da3ed5db7103 Extracting [===========> ] 28.41MB/127.4MB 11:46:28 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 11:46:28 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 11:46:28 09d5a3f70313 
11:46:28 kafka Pulled
11:46:28 zookeeper Pulled
11:46:28 Network compose_default Creating
11:46:28 Network compose_default Created
11:46:28 Container prometheus Creating
11:46:28 Container postgres Creating
11:46:28 Container zookeeper Creating
11:46:46 Container prometheus Created
11:46:46 Container grafana Creating
11:46:46 Container postgres Created
11:46:46 Container policy-db-migrator Creating
11:46:46 Container zookeeper Created
11:46:46 Container kafka Creating
11:46:46 Container policy-db-migrator Created
11:46:46 Container policy-api Creating
11:46:46 Container grafana Created
11:46:46 Container kafka Created
11:46:46 Container policy-api Created
11:46:46 Container policy-pap Creating
11:46:46 Container policy-pap Created
11:46:46 Container policy-opa-pdp Creating
11:46:46 Container policy-opa-pdp Created
11:46:46 Container zookeeper Starting
11:46:46 Container prometheus Starting
11:46:46 Container postgres Starting
11:46:47 Container zookeeper Started
11:46:47 Container kafka Starting
11:46:48 Container kafka Started
11:46:48 Container postgres Started
11:46:48 Container policy-db-migrator Starting
11:46:49 Container policy-db-migrator Started
11:46:49 Container policy-api Starting
11:46:49 Container prometheus Started
11:46:49 Container grafana Starting
11:46:50 Container policy-api Started
11:46:50 Container policy-pap Starting
11:46:51 Container policy-pap Started
11:46:51 Container policy-opa-pdp Starting
11:46:52 Container policy-opa-pdp Started
11:46:52 Container grafana Started
11:46:52 Prometheus server: http://localhost:30259
11:46:52 Grafana server: http://localhost:30269
11:46:52 Waiting 3 minutes for OPA-PDP to start...
11:49:52 Checking if REST port 30003 is open on localhost ...
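A port probe like the one logged here is typically a small shell loop around netcat; the following is a minimal sketch under that assumption, not the job's actual helper script:

  # Minimal sketch of a REST-port readiness probe (assumed; the real
  # CSIT helper is not shown in this log). Polls a TCP port until it
  # accepts connections or a timeout expires.
  check_port() {
    local host="$1" port="$2" timeout="${3:-60}" waited=0
    while ! nc -z "$host" "$port" 2>/dev/null; do
      sleep 2
      waited=$((waited + 2))
      if [ "$waited" -ge "$timeout" ]; then
        echo "port ${port} on ${host} not open after ${timeout}s" >&2
        return 1
      fi
    done
    echo "port ${port} on ${host} is open"
  }

  check_port localhost 30003   # OPA-PDP REST port checked above
  check_port localhost 30012   # second REST port checked below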
11:49:52 IMAGE                                                      NAMES            STATUS
11:49:52 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:49:52 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:49:52 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:49:52 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:49:52 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:49:52 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:49:52 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:49:52 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:49:52 Checking if REST port 30012 is open on localhost ...
11:49:52 IMAGE                                                      NAMES            STATUS
11:49:52 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:49:52 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:49:52 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:49:52 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:49:52 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:49:52 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:49:52 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:49:52 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:49:52 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/resources/tests/models'...
11:49:53 Building robot framework docker image
11:50:31 sha256:afbc8e811338be52e8e793ff7bd1e20da001e6cedaac667ffcf9841b8746ba8c
11:50:34 top - 11:50:34 up 6 min, 0 users, load average: 1.14, 1.15, 0.60
11:50:34 Tasks: 219 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
11:50:34 %Cpu(s): 9.7 us, 2.2 sy, 0.0 ni, 84.4 id, 3.6 wa, 0.0 hi, 0.1 si, 0.1 st
11:50:34
11:50:34         total   used   free   shared   buff/cache   available
11:50:34 Mem:      31G   2.3G    21G      28M         7.3G         28G
11:50:34 Swap:    1.0G     0B   1.0G
11:50:34
11:50:35 IMAGE                                                      NAMES            STATUS
11:50:35 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:50:35 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:50:35 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:50:35 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:50:35 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:50:35 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:50:35 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:50:35 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:50:35
11:50:37 CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
11:50:37 baf39881ae3a   policy-opa-pdp   0.30%   12.92MiB / 31.41GiB   0.04%   82.1kB / 79.3kB   0B / 0B          20
11:50:37 68ccd4bd593c   policy-pap       0.68%   484.1MiB / 31.41GiB   1.50%   2.21MB / 1.23MB   0B / 139MB       69
11:50:37 69516802d014   policy-api       0.14%   399.4MiB / 31.41GiB   1.24%   1.15MB / 1.05MB   0B / 0B          60
11:50:37 60e86e26928d   kafka            2.31%   393.4MiB / 31.41GiB   1.22%   310kB / 292kB     8.19kB / 692kB   83
11:50:37 3b716bb711b4   grafana          0.22%   117.6MiB / 31.41GiB   0.37%   19.1MB / 181kB    0B / 31.7MB      20
11:50:37 dd5b834b7b6d   zookeeper        0.08%   84.77MiB / 31.41GiB   0.26%   56.9kB / 51.4kB   229kB / 426kB    62
11:50:37 0ecb3f986312   postgres         0.02%   86.42MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   0B / 159MB       26
11:50:37 2a60ac360ea8   prometheus       0.19%   21.11MiB / 31.41GiB   0.07%   204kB / 10.2kB    0B / 0B          12
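The container inventory and resource snapshot above can be reproduced with standard Docker and procps commands; a sketch of an equivalent one-shot capture (assumed, the job's own script is not shown in this log):

  # One-shot environment snapshot (sketch; all flags are standard)
  docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'  # image/name/status table
  top -bn1 | head -n 5       # batch-mode top: load average, tasks, CPU split
  free -h                    # memory and swap summary
  docker stats --no-stream   # point-in-time per-container CPU/memory/IO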
11:50:37
11:50:37 Container policy-csit Creating
11:50:37 Container policy-csit Created
11:50:37 Attaching to policy-csit
11:50:38 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
11:50:38 policy-csit | Run Robot test
11:50:38 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
11:50:38 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
11:50:38 policy-csit | -v POLICY_API_IP:policy-api:6969
11:50:38 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
11:50:38 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
11:50:38 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
11:50:38 policy-csit | -v APEX_IP:policy-apex-pdp:6969
11:50:38 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
11:50:38 policy-csit | -v KAFKA_IP:kafka:9092
11:50:38 policy-csit | -v PROMETHEUS_IP:prometheus:9090
11:50:38 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
11:50:38 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
11:50:38 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
11:50:38 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
11:50:38 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
11:50:38 policy-csit | -v TEMP_FOLDER:/tmp/distribution
11:50:38 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
11:50:38 policy-csit | -v TEST_ENV:docker
11:50:38 policy-csit | -v JAEGER_IP:jaeger:16686
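The ROBOT_VARIABLES listed above map one-to-one onto Robot Framework's -v options; the underlying invocation presumably looks roughly like this sketch (the wrapper inside the policy-csit image is not shown, and only a few of the variables are repeated here):

  # Sketch of the Robot Framework call behind "Run Robot test";
  # --outputdir matches the Output/Log/Report paths reported below.
  robot --outputdir /tmp/results \
        -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
        -v POLICY_OPA_IP:policy-opa-pdp:8282 \
        -v PROMETHEUS_IP:prometheus:9090 \
        -v TEST_ENV:docker \
        opa-pdp-test.robot opa-pdp-slas.robot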
11:50:38 policy-csit | Starting Robot test suites ...
11:50:38 policy-csit | ==============================================================================
11:50:38 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
11:50:38 policy-csit | ==============================================================================
11:50:38 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
11:50:38 policy-csit | ==============================================================================
11:50:38 policy-csit | Healthcheck :: Verify OPA PDP health check                            | PASS |
11:50:38 policy-csit | ------------------------------------------------------------------------------
11:50:38 policy-csit | ValidateDataBeforePolicyDeployment                                    | PASS |
11:50:38 policy-csit | ------------------------------------------------------------------------------
11:51:05 policy-csit | ValidatesZonePolicy                                                   | PASS |
11:51:05 policy-csit | ------------------------------------------------------------------------------
11:51:30 policy-csit | ValidatesVehiclePolicy                                                | PASS |
11:51:30 policy-csit | ------------------------------------------------------------------------------
11:51:56 policy-csit | ValidatesAbacPolicy                                                   | PASS |
11:51:56 policy-csit | ------------------------------------------------------------------------------
11:51:56 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test                              | PASS |
11:51:56 policy-csit | 5 tests, 5 passed, 0 failed
11:51:56 policy-csit | ==============================================================================
11:51:56 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
11:51:56 policy-csit | ==============================================================================
11:52:56 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
11:52:56 policy-csit | ------------------------------------------------------------------------------
11:52:56 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
11:52:56 policy-csit | ------------------------------------------------------------------------------
11:52:56 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
11:52:56 policy-csit | ------------------------------------------------------------------------------
11:52:56 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
11:52:56 policy-csit | ------------------------------------------------------------------------------
11:52:56 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
11:52:56 policy-csit | ------------------------------------------------------------------------------
11:52:56 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas                              | PASS |
11:52:56 policy-csit | 5 tests, 5 passed, 0 failed
11:52:56 policy-csit | ==============================================================================
11:52:56 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas                                           | PASS |
11:52:56 policy-csit | 10 tests, 10 passed, 0 failed
11:52:56 policy-csit | ==============================================================================
11:52:56 policy-csit | Output: /tmp/results/output.xml
11:52:56 policy-csit | Log: /tmp/results/log.html
11:52:56 policy-csit | Report: /tmp/results/report.html
11:52:56 policy-csit | RESULT: 0
11:52:56 policy-csit exited with code 0
11:52:56 IMAGE                                                      NAMES            STATUS
11:52:56 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 6 minutes
11:52:56 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 6 minutes
11:52:56 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 6 minutes
11:52:56 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 6 minutes
11:52:56 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 6 minutes
11:52:56 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 6 minutes
11:52:56 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 6 minutes
11:52:56 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 6 minutes
11:52:56 Shut down started!
11:52:58 Collecting logs from docker compose containers...
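Collecting the logs from every compose service, as announced above, is commonly a loop over the service list; a minimal sketch under that assumption (the destination directory /tmp/archives is illustrative, not taken from this log):

  # Sketch: dump each compose service's log to its own file
  for svc in $(docker compose ps --services); do
    docker compose logs --no-color "$svc" > "/tmp/archives/${svc}.log"
  done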
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876041381Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T11:46:52Z
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876433068Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876481518Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876505899Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.87658136Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876621051Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876681842Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876706662Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876769163Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876798584Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876881835Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.876945116Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877054448Z level=info msg=Target target=[all]
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877101708Z level=info msg="Path Home" path=/usr/share/grafana
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.87716604Z level=info msg="Path Data" path=/var/lib/grafana
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877249781Z level=info msg="Path Logs" path=/var/log/grafana
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877368943Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877453644Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
11:52:58 grafana | logger=settings t=2025-06-16T11:46:52.877582626Z level=info msg="App mode production"
11:52:58 grafana | logger=featuremgmt t=2025-06-16T11:46:52.878019494Z level=info msg=FeatureToggles alertRuleRestore=true alertingRuleVersionHistoryRestore=true prometheusAzureOverrideAudience=true logsPanelControls=true alertingNotificationsStepMode=true influxdbBackendMigration=true correlations=true dashgpt=true newDashboardSharingComponent=true promQLScope=true transformationsRedesign=true alertingInsights=true pluginsDetailsRightPanel=true useSessionStorageForRedirection=true alertingRuleRecoverDeleted=true dataplaneFrontendFallback=true logRowsPopoverMenu=true azureMonitorEnableUserAuth=true recoveryThreshold=true cloudWatchRoundUpEndTime=true dashboardSceneSolo=true unifiedRequestLog=true grafanaconThemes=true lokiStructuredMetadata=true addFieldFromCalculationStatFunctions=true logsContextDatasourceUi=true pinNavItems=true kubernetesClientDashboardsFolders=true nestedFolders=true tlsMemcached=true lokiQueryHints=true panelMonitoring=true cloudWatchNewLabelParsing=true alertingApiServer=true alertingRulePermanentlyDelete=true cloudWatchCrossAccountQuerying=true alertingSimplifiedRouting=true newFiltersUI=true onPremToCloudMigrations=true dashboardScene=true dashboardSceneForViewers=true ssoSettingsApi=true newPDFRendering=true formatString=true kubernetesPlaylists=true ssoSettingsSAML=true angularDeprecationUI=true logsInfiniteScrolling=true lokiLabelNamesQueryApi=true awsAsyncQueryCaching=true reportingUseRawTimeRange=true recordedQueriesMulti=true groupToNestedTableTransformation=true unifiedStorageSearchPermissionFiltering=true alertingUIOptimizeReducer=true prometheusUsesCombobox=true publicDashboardsScene=true logsExploreTableVisualisation=true externalCorePlugins=true failWrongDSUID=true preinstallAutoUpdate=true alertingQueryAndExpressionsStepMode=true lokiQuerySplitting=true annotationPermissionUpdate=true azureMonitorPrometheusExemplars=true
11:52:58 grafana | logger=sqlstore t=2025-06-16T11:46:52.878198927Z level=info msg="Connecting to DB" dbtype=sqlite3
11:52:58 grafana | logger=sqlstore t=2025-06-16T11:46:52.878286869Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.879863115Z level=info msg="Locking database"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.879914645Z level=info msg="Starting DB migrations"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.880609377Z level=info msg="Executing migration" id="create migration_log table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.881648825Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.038878ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.916257082Z level=info msg="Executing migration" id="create user table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.917808758Z level=info msg="Migration successfully executed" id="create user table" duration=1.550186ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.923700276Z level=info msg="Executing migration" id="add unique index user.login"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.924577621Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=877.215µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.927778934Z level=info msg="Executing migration" id="add unique index user.email"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.9286779Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=898.546µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.932198918Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.933015872Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=816.374µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.939123883Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.940005719Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=881.446µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.943484017Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.946006789Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.520462ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.949021839Z level=info msg="Executing migration" id="create user table v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.949970765Z level=info msg="Migration successfully executed" id="create user table v2" duration=947.965µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.955002319Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.955771431Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=768.692µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.960326308Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.96107012Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=744.162µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.965712518Z level=info msg="Executing migration" id="copy data_source v1 to v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.966203716Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=490.788µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.970121651Z level=info msg="Executing migration" id="Drop old table user_v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.970872744Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=750.273µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.97546462Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.976600429Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.134759ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.979910204Z level=info msg="Executing migration" id="Update user table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.980060857Z level=info msg="Migration successfully executed" id="Update user table charset" duration=151.073µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.984059693Z level=info msg="Executing migration" id="Add last_seen_at column to user"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.985222483Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.16493ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.988370526Z level=info msg="Executing migration" id="Add missing user data"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.988732862Z level=info msg="Migration successfully executed" id="Add missing user data" duration=361.986µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.993607053Z level=info msg="Executing migration" id="Add is_disabled column to user"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:52.994890294Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.282561ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.000255684Z level=info msg="Executing migration" id="Add index user.login/user.email"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.00121316Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=957.076µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.0041968Z level=info msg="Executing migration" id="Add is_service_account column to user"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.005070214Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=873.004µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.037135709Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.049728729Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.58961ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.052889812Z level=info msg="Executing migration" id="Add uid column to user"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.054190723Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.300601ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.058840361Z level=info msg="Executing migration" id="Update uid column values for users"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.059145576Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=304.575µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.063686272Z level=info msg="Executing migration" id="Add unique index user_uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.064587417Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=899.825µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.068083815Z level=info msg="Executing migration" id="Add is_provisioned column to user"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.069958057Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.873282ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.075842675Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.076387974Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=551.489µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.07975561Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.08037764Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=621.73µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.083580214Z level=info msg="Executing migration" id="update login and email fields to lowercase"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.084034681Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=454.077µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.089505722Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.089867278Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=359.216µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.093220794Z level=info msg="Executing migration" id="create temp user table v1-7"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.09414653Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=925.376µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.09716428Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.097926113Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=761.603µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.101113586Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.101915499Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=802.163µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.107624924Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.108367997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=743.733µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.111335557Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.112013888Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=677.481µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.115949733Z level=info msg="Executing migration" id="Update temp_user table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.115977774Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=28.461µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.12169352Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.122391481Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=697.821µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.125424212Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.126106673Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=681.761µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.129154324Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.129819885Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=665.311µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.162846956Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.163976355Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.130949ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.167590245Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.173074567Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.482882ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.176449433Z level=info msg="Executing migration" id="create temp_user v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.177311877Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=862.174µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.181719571Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.182500744Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=780.813µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.185601075Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.186712064Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.110479ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.189985479Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.190743421Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=761.112µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.19603181Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.196802612Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=770.532µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.200126178Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.200509885Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=383.647µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.203462414Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.204037993Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=575.049µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.206734458Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.207117335Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=382.687µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.212635377Z level=info msg="Executing migration" id="create star table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.213825896Z level=info msg="Migration successfully executed" id="create star table" duration=1.189539ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.217543909Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.218380733Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=836.984µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.220979096Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.22242499Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.445364ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.227755968Z level=info msg="Executing migration" id="Add column org_id in star"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.229754122Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.995634ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.232986656Z level=info msg="Executing migration" id="Add column updated in star"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.235291124Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.304468ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.238507288Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.239339992Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=832.224µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.243264818Z level=info msg="Executing migration" id="create org table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.24400232Z level=info msg="Migration successfully executed" id="create org table v1" duration=734.592µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.249205647Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.25001757Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=811.593µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.253782843Z level=info msg="Executing migration" id="create org_user table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.254897181Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.114188ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.258104905Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.259312796Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.20406ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.294607924Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.296807791Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=2.202057ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.301715123Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.303573783Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.86029ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.307130153Z level=info msg="Executing migration" id="Update org table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.307263175Z level=info msg="Migration successfully executed" id="Update org table charset" duration=133.242µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.309627515Z level=info msg="Executing migration" id="Update org_user table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.309758777Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=131.522µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.312984721Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.313345407Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=355.385µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.318035175Z level=info msg="Executing migration" id="create dashboard table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.319385197Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.349732ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.323887253Z level=info msg="Executing migration" id="add index dashboard.account_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.324843338Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=956.035µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.328077442Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.328893255Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=815.093µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.332409055Z level=info msg="Executing migration" id="create dashboard_tag table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.333446272Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.038117ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.336641725Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.337729744Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.087909ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.342602204Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.343447549Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=845.115µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.346609212Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.352256916Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.647574ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.356530217Z level=info msg="Executing migration" id="create dashboard v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.357353011Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=825.114µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.362452096Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.364277376Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.823921ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.367810755Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.369311381Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.499926ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.37287401Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.373278996Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=403.896µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.378202779Z level=info msg="Executing migration" id="drop table dashboard_v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.379061713Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=858.314µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.383858883Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.383909924Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=53.69µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.38907919Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.391081774Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.001894ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.422544568Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.426507544Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.963646ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.429602876Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.431412216Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.80834ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.434308475Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.435050177Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=741.282µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.439919398Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.441889031Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.968573ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.445632844Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.446405756Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=772.912µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.452618621Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.454598353Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.978232ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.459005396Z level=info msg="Executing migration" id="Update dashboard table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.459048977Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=44.841µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.461452697Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.461481418Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.091µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.465085938Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.466620444Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.533476ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.475785696Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.47898354Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.195244ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.484522862Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.48740051Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.876588ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.490555273Z level=info msg="Executing migration" id="Add column uid in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.492332083Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.77631ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.495125749Z level=info msg="Executing migration" id="Update uid column values in dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.495311762Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=188.633µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.501345013Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.502206477Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=861.064µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.506260495Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.507371483Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.110418ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.510819081Z level=info msg="Executing migration" id="Update dashboard title length"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.510846191Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.5µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.515779814Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.51672795Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=947.716µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.519868792Z level=info msg="Executing migration" id="create dashboard_provisioning"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.52094271Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.073918ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.552655408Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.560923746Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.272308ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.566064622Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.566721343Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=656.331µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.5695581Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.570229482Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=670.252µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.574272079Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.57494245Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=670.241µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.581333018Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.581907127Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=576.069µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.584998589Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.585776191Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=773.932µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.588742921Z level=info msg="Executing migration" id="Add check_sum column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.591042359Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.298638ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.593955008Z level=info msg="Executing migration" id="Add index for dashboard_title"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.594800122Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=844.714µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.600630939Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.600896844Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=265.495µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.605025812Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.605440879Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=413.867µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.609062659Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.610187479Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.125809ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.614929538Z level=info msg="Executing migration" id="Add isPublic for dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.617130664Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.200306ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.620691944Z level=info msg="Executing migration" id="Add deleted for dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.624275294Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.58283ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.627724681Z level=info msg="Executing migration" id="Add index for deleted"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.628564935Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=840.114µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.63364956Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.63666203Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.01168ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.640130779Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.642506857Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.375269ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.645502207Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.646044417Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=541.55µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.65702681Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.660882044Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.854354ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.666074041Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.666962866Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=888.455µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.670353942Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.670870442Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=515.669µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.674447671Z level=info msg="Executing migration" id="create data_source table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.676010996Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.563345ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.680818857Z level=info msg="Executing migration" id="add index data_source.account_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.682295562Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.478275ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.686289348Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.687481057Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.191499ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.691083068Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.691881111Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=797.743µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.696252995Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.697013457Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=756.912µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.700228801Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.70680787Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.578559ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.711345026Z level=info msg="Executing migration" id="create data_source table v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.712154759Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=809.513µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.717769573Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.719101846Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.332793ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.724044058Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.724634188Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=591.07µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.727553457Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.727962053Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=408.366µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.735942827Z level=info msg="Executing migration" id="Add column with_credentials"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.738103123Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.161716ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.74335541Z level=info msg="Executing migration" id="Add secure json data column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.746193647Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.837457ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.74932406Z level=info msg="Executing migration" id="Update data_source table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.749362151Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=38.691µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.78110967Z level=info msg="Executing migration" id="Update initial version to 1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.781440325Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=330.795µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.786069253Z level=info msg="Executing migration" id="Add read_only data column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.790221752Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.151359ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.793174891Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.793412455Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=237.014µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.796313183Z level=info msg="Executing migration" id="Update json_data with nulls"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.796577318Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=260.214µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.804742774Z level=info msg="Executing migration" id="Add uid column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.809024596Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.283402ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.81230807Z level=info msg="Executing migration" id="Update uid value"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.812554964Z level=info msg="Migration successfully executed" id="Update uid value" duration=246.724µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.81528821Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.816121274Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=832.624µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.820479247Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.82130435Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=828.093µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.825659374Z level=info msg="Executing migration" id="Add is_prunable column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.828188636Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.532813ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.830957532Z level=info msg="Executing migration" id="Add api_version column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.833515294Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.557002ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.838748101Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.838766061Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=18.44µs
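Every successful migration above carries a logfmt duration field (µs/ms/s). When a CSIT run is slow to come up, summing those durations from the collected container log shows whether DB migration time is a factor. A small parsing sketch, keyed to the exact line format above; the log file path is hypothetical:

import re

# Extract id="..." and duration=... from grafana migrator lines like the
# ones above, and report the total time spent in migrations.
PATTERN = re.compile(r'id="([^"]+)" duration=([0-9.]+)(µs|ms|s)\b')
SCALE = {"µs": 1e-6, "ms": 1e-3, "s": 1.0}

def total_migration_seconds(lines):
    total = 0.0
    for line in lines:
        m = PATTERN.search(line)
        if m:
            total += float(m.group(2)) * SCALE[m.group(3)]
    return total

with open("grafana.log", encoding="utf-8") as fh:  # hypothetical path
    print(f"migrations took ~{total_migration_seconds(fh):.3f}s")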
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.840833976Z level=info msg="Executing migration" id="create api_key table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.84166442Z level=info msg="Migration successfully executed" id="create api_key table" duration=829.954µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.8446284Z level=info msg="Executing migration" id="add index api_key.account_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.845568665Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=939.905µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.851195479Z level=info msg="Executing migration" id="add index api_key.key"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.852794955Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.600936ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.860020396Z level=info msg="Executing migration" id="add index api_key.account_id_name"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.860901961Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=883.455µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.865678141Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.866509574Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=830.353µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.893745239Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.89444842Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=703.661µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.89857444Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.899456054Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=881.464µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.903769156Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.908864141Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.094985ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.911472854Z level=info msg="Executing migration" id="create api_key table v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.912006744Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=531.2µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.913996416Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.914558756Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=562.25µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.920217101Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.920828271Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=612.89µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.923651738Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.924256518Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=604.53µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.927025544Z level=info msg="Executing migration" id="copy api_key v1 to v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.927294948Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=268.944µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.932411294Z level=info msg="Executing migration" id="Drop old table api_key_v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.933868438Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.451894ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.937208774Z level=info msg="Executing migration" id="Update api_key table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.937287526Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=78.822µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.940448508Z level=info msg="Executing migration" id="Add expires to api_key table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.942431541Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.982993ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.946454708Z level=info msg="Executing migration" id="Add service account foreign key"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.9483081Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.852982ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.952835515Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.953023618Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=187.904µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.955885575Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.957865929Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.980014ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.961100262Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.963016395Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.915823ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.96694407Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.967705563Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=761.223µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.971004928Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.971450266Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=445.217µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.976051102Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.976765724Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=714.462µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.980806431Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.981628474Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=821.853µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.98489433Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.985824355Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=925.525µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.989012748Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:53.989936753Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=921.585µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.0059426Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.006027322Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=67.731µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.009490919Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.009592641Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=101.242µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.012855436Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.015787085Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.931089ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.020709827Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.023553394Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.842848ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.028428866Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.028530277Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=102.411µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.031844032Z level=info msg="Executing migration" id="create quota table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.032669896Z level=info msg="Migration successfully executed" id="create quota table v1" duration=822.784µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.036183745Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.037745841Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.562306ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.043196211Z level=info msg="Executing migration" id="Update quota table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.043321614Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=126.393µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.046035879Z level=info msg="Executing migration" id="create plugin_setting table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.046974375Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=935.726µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.050625786Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.05151849Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=892.244µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.056946051Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.060237347Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.290576ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.065940032Z level=info msg="Executing migration" id="Update plugin_setting table charset"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.066060554Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=120.822µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.069213426Z level=info msg="Executing migration" id="update NULL org_id to 1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.069684953Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=467.987µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.074035085Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.085895474Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=11.852239ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.093057533Z level=info msg="Executing migration" id="create session table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.093924708Z level=info msg="Migration successfully executed" id="create session table" duration=866.925µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.097181162Z level=info msg="Executing migration" id="Drop old table playlist table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.097325785Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=145.683µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.100558829Z level=info msg="Executing migration" id="Drop old table playlist_item table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.100682261Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=121.422µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.135329188Z level=info msg="Executing migration" id="create playlist table v2"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.136133772Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=807.114µs
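The pair of entries above, "update NULL org_id to 1" followed by "make org_id NOT NULL and DEFAULT VALUE 1", is the usual two-step constraint tightening: backfill offending rows first, then enforce the constraint, which for SQLite again means a table rebuild (hence the comparatively long 11.85ms). A sketch of the same two steps on a hypothetical table, not Grafana's real schema:

import sqlite3

# Backfill NULLs, then rebuild with NOT NULL DEFAULT 1 -- the pattern
# behind the "update NULL org_id to 1" / "make org_id NOT NULL and
# DEFAULT VALUE 1" pair above (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE setting (id INTEGER PRIMARY KEY, org_id INTEGER, plugin_id TEXT);
    INSERT INTO setting (org_id, plugin_id) VALUES (NULL, 'piechart');

    -- 1. backfill so no existing row violates the new constraint
    UPDATE setting SET org_id = 1 WHERE org_id IS NULL;

    -- 2. SQLite cannot add NOT NULL in place, so rebuild the table
    ALTER TABLE setting RENAME TO setting_tmp;
    CREATE TABLE setting (
        id INTEGER PRIMARY KEY,
        org_id INTEGER NOT NULL DEFAULT 1,
        plugin_id TEXT
    );
    INSERT INTO setting SELECT * FROM setting_tmp;
    DROP TABLE setting_tmp;
""")
print(conn.execute("SELECT org_id, plugin_id FROM setting").fetchall())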
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.138836257Z level=info msg="Executing migration" id="create playlist item table v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.139416526Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=579.869µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.141941589Z level=info msg="Executing migration" id="Update playlist table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.141963359Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.731µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.144651904Z level=info msg="Executing migration" id="Update playlist_item table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.144673214Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.09µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.149397532Z level=info msg="Executing migration" id="Add playlist column created_at" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.151801893Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.404131ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.154315055Z level=info msg="Executing migration" id="Add playlist column updated_at" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.156639684Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.325999ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.15944938Z level=info msg="Executing migration" id="drop preferences table v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.159550132Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=100.752µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.164668088Z level=info msg="Executing migration" id="drop preferences table v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.164764169Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=96.431µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.168036674Z level=info msg="Executing migration" id="create preferences table v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.168793436Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=758.222µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.171511331Z level=info msg="Executing migration" id="Update preferences table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.171531542Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=21µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.175539569Z level=info msg="Executing migration" id="Add column team_id in preferences" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.178088771Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.549052ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.180746136Z level=info msg="Executing migration" id="Update team_id column values in preferences" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.180947719Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=201.113µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.184562599Z level=info msg="Executing migration" 
id="Add column week_start in preferences" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.18702923Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.466311ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.189819417Z level=info msg="Executing migration" id="Add column preferences.json_data" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.192095135Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.275208ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.196417017Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.196431207Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=14.72µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.199063401Z level=info msg="Executing migration" id="Add preferences index org_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.199713692Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=650.171µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.203382043Z level=info msg="Executing migration" id="Add preferences index user_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.204087855Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=705.612µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.209267841Z level=info msg="Executing migration" id="create alert table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.210187016Z level=info msg="Migration successfully executed" id="create alert table v1" duration=920.055µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.214156762Z level=info msg="Executing migration" id="add index alert org_id & id " 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.214907345Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=752.223µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.218017697Z level=info msg="Executing migration" id="add index alert state" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.218863761Z level=info msg="Migration successfully executed" id="add index alert state" duration=847.294µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.222082744Z level=info msg="Executing migration" id="add index alert dashboard_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.223195824Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.11311ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.226144003Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.226934956Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=790.733µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.241043761Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.242105548Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.063247ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.245519076Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 11:52:58 grafana | 
logger=migrator t=2025-06-16T11:46:54.24639255Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=874.084µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.250407806Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.261890139Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.479683ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.268384347Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.269088009Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=704.832µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.275122369Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.276138927Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.017308ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.281582847Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.281905202Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=322.485µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.284919033Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.285539233Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=619.49µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.290391274Z level=info msg="Executing migration" id="create alert_notification table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.291170177Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=778.553µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.295058731Z level=info msg="Executing migration" id="Add column is_default" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.299503116Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.442265ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.303240668Z level=info msg="Executing migration" id="Add column frequency" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.307658912Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.420384ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.314311053Z level=info msg="Executing migration" id="Add column send_reminder" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.317259441Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.948889ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.320271782Z level=info msg="Executing migration" id="Add column disable_resolve_message" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.324240018Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.965887ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.328407917Z level=info msg="Executing 
migration" id="add index alert_notification org_id & name" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.329402855Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=995.058µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.360685036Z level=info msg="Executing migration" id="Update alert table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.360733247Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=51.311µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.364356987Z level=info msg="Executing migration" id="Update alert_notification table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.364397198Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=41.571µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.367991577Z level=info msg="Executing migration" id="create notification_journal table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.369196028Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.199901ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.374775021Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.376372807Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.601186ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.379959728Z level=info msg="Executing migration" id="drop alert_notification_journal" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.381272109Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.315301ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.386559767Z level=info msg="Executing migration" id="create alert_notification_state table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.387649345Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.089208ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.390801918Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.391737424Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=935.166µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.395879372Z level=info msg="Executing migration" id="Add for to alert table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.400010682Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.13094ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.403624292Z level=info msg="Executing migration" id="Add column uid in alert_notification" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.40765619Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.031568ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.412218065Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.412399828Z level=info msg="Migration successfully executed" id="Update uid column values in 
alert_notification" duration=180.223µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.416781762Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.417791708Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.009616ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.421127373Z level=info msg="Executing migration" id="Remove unique index org_id_name" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.422346044Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.217511ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.42630457Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.431584478Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.279998ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.436109374Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.436125954Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=17.42µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.440591308Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.441673747Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.079229ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.445097924Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.445927047Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=828.743µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.450163658Z level=info msg="Executing migration" id="Drop old annotation table v4" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.45024349Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=79.892µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.453641986Z level=info msg="Executing migration" id="create annotation table v5" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.45451677Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=874.554µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.457874946Z level=info msg="Executing migration" id="add index annotation 0 v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.459198838Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.318782ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.489149489Z level=info msg="Executing migration" id="add index annotation 1 v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.490535351Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.385722ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.494384516Z level=info msg="Executing migration" id="add index annotation 2 v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.49524573Z level=info msg="Migration successfully executed" id="add 
index annotation 2 v3" duration=860.904µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.499047813Z level=info msg="Executing migration" id="add index annotation 3 v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.500003489Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=954.926µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.504750748Z level=info msg="Executing migration" id="add index annotation 4 v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.505703694Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=952.436µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.509222583Z level=info msg="Executing migration" id="Update annotation table charset" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.509246393Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.513985073Z level=info msg="Executing migration" id="Add column region_id to annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.521182802Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.196399ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.52583082Z level=info msg="Executing migration" id="Drop category_id index" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.526670224Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=839.094µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.529949768Z level=info msg="Executing migration" id="Add column tags to annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.534339092Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.388554ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.539190853Z level=info msg="Executing migration" id="Create annotation_tag table v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.540093158Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=901.615µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.544495171Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.545510578Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.014798ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.548834973Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.549717068Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=881.815µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.553006153Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.563851123Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.845021ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.569283734Z level=info msg="Executing migration" id="Create annotation_tag table v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.569984416Z level=info msg="Migration successfully 
executed" id="Create annotation_tag table v3" duration=715.282µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.573299001Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.574341348Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.041747ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.579925602Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.580311728Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=385.306µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.62840313Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.629449168Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.044838ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.633684098Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.634132275Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=446.987µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.637721416Z level=info msg="Executing migration" id="Add created time to annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.641833474Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.111458ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.646780427Z level=info msg="Executing migration" id="Add updated time to annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.652038444Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.257217ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.655524182Z level=info msg="Executing migration" id="Add index for created in annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.656460728Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=936.106µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.659971167Z level=info msg="Executing migration" id="Add index for updated in annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.661096075Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.124468ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.665377227Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.665674982Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=297.095µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.668920406Z level=info msg="Executing migration" id="Add epoch_end column" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.673210347Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.289071ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.67759843Z 
level=info msg="Executing migration" id="Add index for epoch_end" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.678755289Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.155949ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.682154977Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.682521963Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=366.386µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.687460584Z level=info msg="Executing migration" id="Move region to single row" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.687844072Z level=info msg="Migration successfully executed" id="Move region to single row" duration=382.878µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.691200288Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.69256359Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.362833ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.696238422Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.697686875Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.451583ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.702407274Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.703596574Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.18891ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.707206613Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.708961503Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.75458ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.712843078Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.714237781Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.412004ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.718868069Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.719728453Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=860.004µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.764131204Z level=info msg="Executing migration" id="Increase tags column to length 4096" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.764158244Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=28.69µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.769109457Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 11:52:58 grafana | 
logger=migrator t=2025-06-16T11:46:54.769136557Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=28.32µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.773831125Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.773848855Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=61.291µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.77713153Z level=info msg="Executing migration" id="create test_data table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.777965584Z level=info msg="Migration successfully executed" id="create test_data table" duration=833.665µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.781376501Z level=info msg="Executing migration" id="create dashboard_version table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.782626141Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.24861ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.787439902Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.789050169Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.609747ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.792626799Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.794601972Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.974094ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.799232878Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.799417141Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=181.703µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.802169157Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.802565305Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=394.978µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.806907487Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.806926547Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=19.29µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.809599722Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.814000965Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.400303ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.81854Z level=info msg="Executing migration" id="create team table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.819435136Z level=info msg="Migration successfully executed" id="create team table" duration=894.786µs 11:52:58 grafana | logger=migrator 
t=2025-06-16T11:46:54.824313577Z level=info msg="Executing migration" id="add index team.org_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.825952534Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.638468ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.829827289Z level=info msg="Executing migration" id="add unique index team_org_id_name" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.830834726Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.006847ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.834493247Z level=info msg="Executing migration" id="Add column uid in team" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.842017593Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.521496ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.847586035Z level=info msg="Executing migration" id="Update uid column values in team" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.847823139Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=236.994µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.851347218Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.852288603Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=940.095µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.855725541Z level=info msg="Executing migration" id="Add column external_uid in team" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.860254006Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.528055ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.898011936Z level=info msg="Executing migration" id="Add column is_provisioned in team" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.907018496Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=9.00518ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.910504624Z level=info msg="Executing migration" id="create team member table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.911126024Z level=info msg="Migration successfully executed" id="create team member table" duration=620.93µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.914325688Z level=info msg="Executing migration" id="add index team_member.org_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.915107861Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=781.654µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.919471163Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.921091071Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.619168ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.925310051Z level=info msg="Executing migration" id="add index team_member.team_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.9270598Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.75733ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.934525915Z level=info msg="Executing migration" id="Add column email to 
team table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.942113871Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.591226ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.946150219Z level=info msg="Executing migration" id="Add column external to team_member table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.949604806Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.453547ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.953965049Z level=info msg="Executing migration" id="Add column permission to team_member table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.95883927Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.920782ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.961716978Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.962604443Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=886.995µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.965596773Z level=info msg="Executing migration" id="create dashboard acl table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.966401756Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=804.533µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.970706958Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.972225723Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.518425ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.975531598Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.977287898Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.75537ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.980507111Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.981428077Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=920.615µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.985929122Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.986790696Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=861.244µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.989640263Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.990542209Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=901.426µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.993484738Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:54.994350722Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=865.524µs 11:52:58 grafana | logger=migrator 
t=2025-06-16T11:46:55.005405017Z level=info msg="Executing migration" id="add index dashboard_permission" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.006936742Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.527715ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.010270948Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.010762476Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=491.488µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.015499625Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.015727619Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=226.144µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.017972626Z level=info msg="Executing migration" id="create tag table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.018945963Z level=info msg="Migration successfully executed" id="create tag table" duration=972.177µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.022226697Z level=info msg="Executing migration" id="add index tag.key_value" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.023723451Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.492204ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.029755453Z level=info msg="Executing migration" id="create login attempt table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.030546226Z level=info msg="Migration successfully executed" id="create login attempt table" duration=790.163µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.033915442Z level=info msg="Executing migration" id="add index login_attempt.username" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.035289355Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.372383ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.040130875Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.041326646Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.197881ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.044643971Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.059931205Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.285504ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.064260438Z level=info msg="Executing migration" id="create login_attempt v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.064856697Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=597.14µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.068874775Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.069631877Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=756.942µs 11:52:58 grafana | 
logger=migrator t=2025-06-16T11:46:55.073302218Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.073850107Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=547.499µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.077381427Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.078361083Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=978.996µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.082941959Z level=info msg="Executing migration" id="create user auth table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.083815244Z level=info msg="Migration successfully executed" id="create user auth table" duration=872.655µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.089445867Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.091158376Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.711909ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.09499345Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.09502628Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=33.95µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.1303347Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.136708735Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.374906ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.140116352Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.1453892Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.252628ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.148817877Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.154511422Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.692605ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.159987373Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.165381733Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.39463ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.168722909Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.169743156Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.019738ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.173009551Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.178277758Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.267098ms 11:52:58 grafana | logger=migrator 
t=2025-06-16T11:46:55.182711571Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.188208643Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.496202ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.192382743Z level=info msg="Executing migration" id="create server_lock table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.193178376Z level=info msg="Migration successfully executed" id="create server_lock table" duration=795.163µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.196582323Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.19759001Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.009297ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.202152366Z level=info msg="Executing migration" id="create user auth token table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.203055091Z level=info msg="Migration successfully executed" id="create user auth token table" duration=902.004µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.206660441Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.20780437Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.142249ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.213003627Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.214568123Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.563656ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.219251531Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.220198637Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=946.456µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.223601954Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.232233437Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.600123ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.261913003Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.263830054Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.919722ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.267556527Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.273217891Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.660814ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.277762706Z level=info msg="Executing migration" id="create cache_data table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.278715502Z level=info msg="Migration successfully executed" id="create cache_data table" duration=952.306µs 11:52:58 grafana | logger=migrator 
t=2025-06-16T11:46:55.282146819Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.283123095Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=976.016µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.28754732Z level=info msg="Executing migration" id="create short_url table v1" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.288406974Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=858.874µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.294703688Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.296557029Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.852161ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.300408783Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.300437184Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=29.511µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.306098088Z level=info msg="Executing migration" id="delete alert_definition table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.306258161Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=159.133µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.309274381Z level=info msg="Executing migration" id="recreate alert_definition table" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.310338339Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.062178ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.316479452Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.318213271Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.736839ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.321740759Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.322854287Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.113348ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.326274354Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.326291174Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=17.69µs 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.331360659Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.332394837Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.033768ms 11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.335726103Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 
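[annotation] Several of the riskier migrations earlier in this log (alert_rule_tag v1 to v2, annotation_tag v2 to v3, login_attempt via login_attempt_tmp_qwerty) use a rename/recreate/copy/drop sequence rather than altering the table in place. A minimal sketch of that sequence, assuming a SQLite stand-in with a toy two-column schema, not the real Grafana schema or migration code:

import sqlite3

def rebuild_table(conn: sqlite3.Connection) -> None:
    cur = conn.cursor()
    # Rename table alert_rule_tag to alert_rule_tag_v1 (keep old rows aside)
    cur.execute("ALTER TABLE alert_rule_tag RENAME TO alert_rule_tag_v1")
    # Create alert_rule_tag table v2 (the new schema; toy columns here)
    cur.execute("CREATE TABLE alert_rule_tag ("
                "id INTEGER PRIMARY KEY, alert_id INTEGER, tag_id INTEGER)")
    # Recreate the unique index on the new table
    cur.execute("CREATE UNIQUE INDEX UQE_alert_rule_tag_alert_id_tag_id "
                "ON alert_rule_tag (alert_id, tag_id)")
    # Copy alert_rule_tag v1 to v2
    cur.execute("INSERT INTO alert_rule_tag (alert_id, tag_id) "
                "SELECT alert_id, tag_id FROM alert_rule_tag_v1")
    # Drop table alert_rule_tag_v1 only after the copy succeeded
    cur.execute("DROP TABLE alert_rule_tag_v1")
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE alert_rule_tag (alert_id INTEGER, tag_id INTEGER)")
    conn.execute("INSERT INTO alert_rule_tag VALUES (1, 1)")
    rebuild_table(conn)
    print(conn.execute("SELECT * FROM alert_rule_tag").fetchall())

The intermediate _v1 (or _tmp_qwerty) table keeps the old rows available until the copy completes, which is why the log records the rename, create, index, copy, and drop as separate migration steps, with the rename/copy steps dominating the durations.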
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.337170216Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.442123ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.342899542Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.344764963Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.863512ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.349529923Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.350543689Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.013087ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.353802223Z level=info msg="Executing migration" id="Add column paused in alert_definition"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.359549069Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.725136ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.389794194Z level=info msg="Executing migration" id="drop alert_definition table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.391823517Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.028463ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.397240968Z level=info msg="Executing migration" id="delete alert_definition_version table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.397502182Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=260.464µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.403954449Z level=info msg="Executing migration" id="recreate alert_definition_version table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.405701579Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.74263ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.409674165Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.411438664Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.763819ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.415210667Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.416240165Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.029008ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.421624464Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.421651895Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=28.871µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.426511945Z level=info msg="Executing migration" id="drop alert_definition_version table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.428031331Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.518296ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.43159523Z level=info msg="Executing migration" id="create alert_instance table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.432628237Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.031657ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.436702555Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.437751642Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.048087ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.441218191Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.442241288Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.022497ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.447122299Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.457634804Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=10.512035ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.462054978Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.46278286Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=727.552µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.466352119Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.467324576Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=972.087µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.471763249Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.498522746Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.759547ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.514452632Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.54438656Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.933868ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.547802038Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.54853199Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=729.622µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.552599307Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.553277859Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=677.872µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.556777247Z level=info msg="Executing migration" id="add current_reason column related to current_state"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.565789347Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.00936ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.569264695Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.575029321Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.763806ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.579639308Z level=info msg="Executing migration" id="create alert_rule table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.580730616Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.091238ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.584455538Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.585543886Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.087708ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.589208677Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.590258865Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.046118ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.595098155Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.59718652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.086555ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.601084926Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.601108116Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=23.54µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.604562023Z level=info msg="Executing migration" id="add column for to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.611472178Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.910075ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.63733353Z level=info msg="Executing migration" id="add column annotations to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.64818798Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=10.85435ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.653650741Z level=info msg="Executing migration" id="add column labels to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.658267158Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.620627ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.662394627Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.663066899Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=671.692µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.66674819Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.668329626Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.573966ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.673668514Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.681113429Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.445275ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.68419717Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.690269191Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.071241ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.693424585Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.694858508Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.433203ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.699933693Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.705948633Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.0146ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.709225378Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.715283618Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.05725ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.718548733Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.718568754Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=26.681µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.723742919Z level=info msg="Executing migration" id="create alert_rule_version table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.725053482Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.308253ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.728497629Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.730098316Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.599697ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.763427691Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.765704999Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.277908ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.771718069Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.771734939Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=17.81µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.774014427Z level=info msg="Executing migration" id="add column for to alert_rule_version"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.780592988Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.574191ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.783627198Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.79097734Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.349082ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.794146523Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.80053548Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.388187ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.805626545Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.811838138Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.210723ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.815327236Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.819819861Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.486815ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.822890183Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.822905963Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=16.52µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.828449925Z level=info msg="Executing migration" id=create_alert_configuration_table
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.829209878Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=759.503µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.834475905Z level=info msg="Executing migration" id="Add column default in alert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.843971083Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.497498ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.847174597Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.847190587Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=16.88µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.852655828Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.859079946Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.423098ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.880305729Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.881977397Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.669368ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.886063585Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.893853225Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.78993ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.897111999Z level=info msg="Executing migration" id=create_ngalert_configuration_table
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.897948114Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=835.955µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.904456942Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.905523839Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.066717ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.908782523Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.915166961Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.383948ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.920726714Z level=info msg="Executing migration" id="create provenance_type table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.921554637Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=827.004µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.924572857Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.925652064Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.078237ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.930298912Z level=info msg="Executing migration" id="create alert_image table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.931112206Z level=info msg="Migration successfully executed" id="create alert_image table" duration=815.564µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.935202844Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.93732873Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=2.123786ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.942625949Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.942654709Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=30.021µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.945946034Z level=info msg="Executing migration" id=create_alert_configuration_history_table
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.946868619Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=922.435µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.9499622Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.950888635Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=926.265µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.956762114Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.95714324Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.961430511Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.962104632Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=676.541µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.965628172Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.967277339Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.648467ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.971680732Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:55.978454525Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.773253ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.006241768Z level=info msg="Executing migration" id="create library_element table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.007859616Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.604857ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.013165134Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.014856802Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.691278ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.018606515Z level=info msg="Executing migration" id="create library_element_connection table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.019462749Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=855.804µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.022799174Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.02378691Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=986.956µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.028054712Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.029043378Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=988.226µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.033151147Z level=info msg="Executing migration" id="increase max description length to 2048"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.033176217Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=26µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.03873039Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.03875855Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=33.97µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.042206978Z level=info msg="Executing migration" id="add library_element folder uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.053256572Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=11.049594ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.057264638Z level=info msg="Executing migration" id="populate library_element folder_uid"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.057548854Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=283.836µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.061232335Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.062316733Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.083938ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.065598047Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.065862941Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=267.624µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.069246748Z level=info msg="Executing migration" id="create data_keys table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.071390313Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.140565ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.079255075Z level=info msg="Executing migration" id="create secrets table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.080163731Z level=info msg="Migration successfully executed" id="create secrets table" duration=908.865µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.083635378Z level=info msg="Executing migration" id="rename data_keys name column to id"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.120115005Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.478647ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.127534969Z level=info msg="Executing migration" id="add name column into data_keys"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.137522785Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.990256ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.142168373Z level=info msg="Executing migration" id="copy data_keys id column values into name"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.142427958Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=259.045µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.146119239Z level=info msg="Executing migration" id="rename data_keys name column to label"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.18160487Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.485411ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.184775993Z level=info msg="Executing migration" id="rename data_keys id column back to name"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.212024447Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.247614ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.217001089Z level=info msg="Executing migration" id="create kv_store table v1"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.217733242Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=731.723µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.25003249Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.252153695Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.121905ms
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.255527991Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.255769945Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=241.894µs
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.260095498Z level=info msg="Executing migration" id="create permission table"
11:52:58 grafana | logger=migrator t=2025-06-16T11:46:56.260962172Z level=info msg="Migration successfully executed" id="create permission table" duration=866.274µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.264033433Z level=info msg="Executing migration" id="add unique index permission.role_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.265699391Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.665258ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.27166165Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.273141785Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.479744ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.278220219Z level=info msg="Executing migration" id="create role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.279500691Z level=info msg="Migration successfully executed" id="create role table" duration=1.279582ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.282770796Z level=info msg="Executing migration" id="add column display_name"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.290535425Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.764489ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.294698174Z level=info msg="Executing migration" id="add column group_name"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.304720731Z level=info msg="Migration successfully executed" id="add column group_name" duration=10.022547ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.31002892Z level=info msg="Executing migration" id="add index role.org_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.311394042Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.365122ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.315041253Z level=info msg="Executing migration" id="add unique index role_org_id_name"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.315914268Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=872.745µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.321110195Z level=info msg="Executing migration" id="add index role_org_id_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.321994049Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=883.854µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.326501895Z level=info msg="Executing migration" id="create team role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.328126321Z level=info msg="Migration successfully executed" id="create team role table" duration=1.624296ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.334588459Z level=info msg="Executing migration" id="add index team_role.org_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.335959382Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.371383ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.339198316Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.340333634Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.134858ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.370734071Z level=info msg="Executing migration" id="add index team_role.team_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.372563601Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.82893ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.376786561Z level=info msg="Executing migration" id="create user role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.378107244Z level=info msg="Migration successfully executed" id="create user role table" duration=1.319963ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.381344068Z level=info msg="Executing migration" id="add index user_role.org_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.382400645Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.055907ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.386915131Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.388031949Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.116818ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.39106936Z level=info msg="Executing migration" id="add index user_role.user_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.392149358Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.079998ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.396551632Z level=info msg="Executing migration" id="create builtin role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.397385345Z level=info msg="Migration successfully executed" id="create builtin role table" duration=833.273µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.401398702Z level=info msg="Executing migration" id="add index builtin_role.role_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.40243301Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.034308ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.406947074Z level=info msg="Executing migration" id="add index builtin_role.name"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.408601462Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.654058ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.411833146Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.421650119Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.816973ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.425088297Z level=info msg="Executing migration" id="add index builtin_role.org_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.426126424Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.037857ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.430717831Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.431787838Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.069377ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.434809478Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.435853337Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.043288ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.438625193Z level=info msg="Executing migration" id="add unique index role.uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.439635019Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.009826ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.444213886Z level=info msg="Executing migration" id="create seed assignment table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.445006599Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=791.903µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.449929661Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.451181381Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.24872ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.45411333Z level=info msg="Executing migration" id="add column hidden to role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.462347477Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.233467ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.541709089Z level=info msg="Executing migration" id="permission kind migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.551489143Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.780053ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.557661235Z level=info msg="Executing migration" id="permission attribute migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.563551144Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.889888ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.577958943Z level=info msg="Executing migration" id="permission identifier migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.588595811Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.637238ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.593725086Z level=info msg="Executing migration" id="add permission identifier index"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.594773453Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.048057ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.599165767Z level=info msg="Executing migration" id="add permission action scope role_id index"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.600224234Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.054947ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.607323773Z level=info msg="Executing migration" id="remove permission role_id action scope index"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.608494812Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.170939ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.611713446Z level=info msg="Executing migration" id="add group mapping UID column to user_role table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.622460175Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=10.746729ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.62994941Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.631669418Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.719998ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.656609464Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.658513816Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.902932ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.664442874Z level=info msg="Executing migration" id="create query_history table v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.665342389Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=899.145µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.671507873Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.672759093Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.249621ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.681017041Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.681037441Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=21.12µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.687919986Z level=info msg="Executing migration" id="create query_history_details table v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.689232087Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.310311ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.694752699Z level=info msg="Executing migration" id="rbac disabled migrator"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.69479306Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=41.351µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.699069132Z level=info msg="Executing migration" id="teams permissions migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.699633921Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=561.789µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.704618255Z level=info msg="Executing migration" id="dashboard permissions"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.705635131Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.015297ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.709523855Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.710786387Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.262632ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.714289565Z level=info msg="Executing migration" id="drop managed folder create actions"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.714497938Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=208.373µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.718879242Z level=info msg="Executing migration" id="alerting notification permissions"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.71935519Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=475.948µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.721986924Z level=info msg="Executing migration" id="create query_history_star table v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.723222164Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.23524ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.726842554Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.728519092Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.676538ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.732988457Z level=info msg="Executing migration" id="add column org_id in query_history_star"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.74219217Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.203263ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.746844558Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.746865748Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=22.39µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.77883607Z level=info msg="Executing migration" id="create correlation table v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.780895815Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.060595ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.786196033Z level=info msg="Executing migration" id="add index correlations.uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.788151605Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.955342ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.791885348Z level=info msg="Executing migration" id="add index correlations.source_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.79385041Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.964802ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.797939249Z level=info msg="Executing migration" id="add correlation config column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.806749835Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.809896ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.813304324Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.815782556Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.474232ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.821685434Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.823749069Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.063635ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.827584042Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.854538672Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=26.95463ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.861317104Z level=info msg="Executing migration" id="create correlation v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.863387139Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.070035ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.868939061Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.870114082Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.174021ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.875484141Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.877313171Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.82903ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.914640592Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.916042936Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.402344ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.919054506Z level=info msg="Executing migration" id="copy correlation v1 to v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.91929052Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=235.994µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.920976739Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.921645849Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=669.11µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.925382841Z level=info msg="Executing migration" id="add provisioning column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.932712064Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.329223ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.936355405Z level=info msg="Executing migration" id="add type column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.942387515Z level=info msg="Migration successfully executed" id="add type column" duration=6.03211ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.94512161Z level=info msg="Executing migration" id="create entity_events table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.945794722Z level=info msg="Migration successfully executed" id="create entity_events table" duration=672.612µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.950949398Z level=info msg="Executing migration" id="create dashboard public config v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.952910431Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.955952ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.958585425Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.959371778Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.963584448Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.96429211Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.967867379Z level=info msg="Executing migration" id="Drop old dashboard public config table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.968870637Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.004128ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.973549945Z level=info msg="Executing migration" id="recreate dashboard public config v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.97510881Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.558865ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.979012425Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.981293583Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.281158ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.989252976Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.990807972Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.556186ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.996153861Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:56.997561474Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.415133ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.001121574Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.002421206Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.299632ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.033017135Z level=info msg="Executing migration" id="Drop public config table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.034272866Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.258671ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.03992641Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.041326254Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.399844ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.045043755Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.046344177Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.300421ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.050936133Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.052328757Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.393494ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.056307643Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.057624175Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.317412ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.064337587Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.086098179Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.757132ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.09341236Z level=info msg="Executing migration" id="add annotations_enabled column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.103904275Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.488265ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.107816Z level=info msg="Executing migration" id="add time_selection_enabled column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.117843458Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.989777ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.1233736Z level=info msg="Executing migration" id="delete orphaned public dashboards"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.123922589Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=550.419µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.127292945Z level=info msg="Executing migration" id="add share column"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.13841545Z level=info msg="Migration successfully executed" id="add share column" duration=11.116745ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.168247007Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.169662921Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=1.416314ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.174868237Z level=info msg="Executing migration" id="create file table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.176516965Z level=info msg="Migration successfully executed" id="create file table" duration=1.647948ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.180095714Z level=info msg="Executing migration" id="file table idx: path natural pk"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.181381176Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.282702ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.184530669Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.185792959Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.26191ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.190715461Z level=info msg="Executing migration" id="create file_meta table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.191684017Z level=info msg="Migration successfully executed" id="create file_meta table" duration=967.946µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.194854741Z level=info msg="Executing migration" id="file table idx: path key"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.196279084Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.424073ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.199421876Z level=info msg="Executing migration" id="set path collation in file table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.199441296Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.27µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.203691988Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.203710678Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=19.61µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.20744217Z level=info msg="Executing migration" id="managed permissions migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.208355175Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=912.135µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.211859254Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.212279881Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=419.627µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.21584216Z level=info msg="Executing migration" id="RBAC action name migrator"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.217256544Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.413934ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.222284077Z level=info msg="Executing migration" id="Add UID column to playlist"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.23206539Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.780583ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.235415636Z level=info msg="Executing migration" id="Update uid column values in playlist"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.235596319Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=180.323µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.238907254Z level=info msg="Executing migration" id="Add index for uid in playlist"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.2398184Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=910.586µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.244315134Z level=info msg="Executing migration" id="update group index for alert rules"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.245004745Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=694.781µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.249783975Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.250266993Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=482.218µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.254440133Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.255027742Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=586.789µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.258373698Z level=info msg="Executing migration" id="add action column to seed_assignment"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.268360584Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.984936ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.275152537Z level=info msg="Executing migration" id="add scope column to seed_assignment"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.287468833Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=12.316896ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.291218095Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.292587338Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.369533ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.297091473Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.372969326Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.876653ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.380924089Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.382143089Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.21909ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.385671248Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.38763343Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.961472ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.392803866Z level=info msg="Executing migration" id="add primary key to seed_assigment"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.417099422Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.295256ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.421661077Z level=info msg="Executing migration" id="add origin column to seed_assignment"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.428064054Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.402517ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.432833823Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.433058838Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=227.685µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.43681382Z level=info msg="Executing migration" id="prevent seeding OnCall access"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.437068714Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=255.044µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.440900837Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.441327174Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=426.047µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.445262731Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.445673437Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=410.446µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.450184553Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.450529038Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=344.485µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.453896244Z level=info msg="Executing migration" id="create folder table"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.455309287Z level=info msg="Migration successfully executed" id="create folder table" duration=1.408143ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.458764526Z level=info msg="Executing migration" id="Add index for parent_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.459855784Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.090858ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.464376339Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.465444086Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.067277ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.468810892Z level=info msg="Executing migration" id="Update folder title length"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.468859033Z level=info msg="Migration successfully executed" id="Update folder title length" duration=48.381µs
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.484087917Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.486285944Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.196227ms
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.491936898Z
level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.493627496Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.689878ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.498830563Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.499955641Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.124318ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.50293514Z level=info msg="Executing migration" id="Sync dashboard and folder table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.503388799Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=452.479µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.509590062Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.510144441Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=554.549µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.51547072Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.517174498Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.703318ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.521125394Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.522303743Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.176289ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.525430646Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.526467473Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.036367ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.531459976Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.532561984Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.101448ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.538105117Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.539435929Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.330052ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.543636739Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.544676556Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.039527ms
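Every migrator entry in this stream follows the same execute-and-time shape: log "Executing migration", run the statement, then log "Migration successfully executed" with the elapsed duration. A minimal Python sketch of that pattern follows; it uses sqlite3 with hypothetical DDL echoing the anon_device ids seen just below, and is illustrative only, not Grafana's actual Go migrator:

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migrator")

# Hypothetical (id, SQL) pairs mirroring migration ids seen in this log.
MIGRATIONS = [
    ("create anon_device table",
     "CREATE TABLE IF NOT EXISTS anon_device ("
     "id INTEGER PRIMARY KEY, device_id TEXT, updated_at INTEGER)"),
    ("add unique index anon_device.device_id",
     "CREATE UNIQUE INDEX IF NOT EXISTS UQE_anon_device_device_id "
     "ON anon_device (device_id)"),
]

def run_migrations(conn: sqlite3.Connection) -> None:
    """Run each migration once, timing it like the duration= field above."""
    for mig_id, sql in MIGRATIONS:
        log.info('Executing migration id=%r', mig_id)
        start = time.perf_counter()
        conn.execute(sql)
        conn.commit()
        log.info('Migration successfully executed id=%r duration=%.3fms',
                 mig_id, (time.perf_counter() - start) * 1000)

run_migrations(sqlite3.connect(":memory:"))
```

The real migrator additionally records completed ids (Grafana keeps a migration_log table) so a restart skips work already done; the resource-migrator phase later in this log reuses the same pattern.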
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.548769874Z level=info msg="Executing migration" id="create anon_device table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.549683809Z level=info msg="Migration successfully executed" id="create anon_device table" duration=913.175µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.553010225Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.554104203Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.093458ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.559399401Z level=info msg="Executing migration" id="add index anon_device.updated_at" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.560780964Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.380993ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.564254472Z level=info msg="Executing migration" id="create signing_key table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.566842835Z level=info msg="Migration successfully executed" id="create signing_key table" duration=2.589313ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.571676046Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.572718533Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.041807ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.57796338Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.579157671Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.194071ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.583273909Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.583549303Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=275.814µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.596426608Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.607664925Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.235897ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.612417295Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.613203697Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=787.313µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.616885828Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.616937349Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=55.501µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.622512052Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.624536266Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.024484ms 11:52:59 grafana | logger=migrator
t=2025-06-16T11:46:57.627942073Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.627959713Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.3µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.633307592Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.634647775Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.338383ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.63856927Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.641135152Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.563972ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.645701688Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.647541689Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.839981ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.656572749Z level=info msg="Executing migration" id="create sso_setting table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.657645218Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.075189ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.661934849Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.663095519Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.16181ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.667641514Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.668208943Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=571.9µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.671691051Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.672331802Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=640.391µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.675546645Z level=info msg="Executing migration" id="create cloud_migration table v1" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.676405469Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=858.634µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.679578013Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.680473948Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=895.555µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.686324325Z level=info msg="Executing migration" id="add stack_id 
column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.698080171Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.755805ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.709182825Z level=info msg="Executing migration" id="add region_slug column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.721641584Z level=info msg="Migration successfully executed" id="add region_slug column" duration=12.459039ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.727108255Z level=info msg="Executing migration" id="add cluster_slug column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.737735111Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=10.625766ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.743966725Z level=info msg="Executing migration" id="add migration uid column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.753016796Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.047991ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.75747819Z level=info msg="Executing migration" id="Update uid column values for migration" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.757659383Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=197.314µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.76107923Z level=info msg="Executing migration" id="Add unique index migration_uid" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.762237599Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.157699ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.76646976Z level=info msg="Executing migration" id="add migration run uid column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.775664553Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.195143ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.779058889Z level=info msg="Executing migration" id="Update uid column values for migration run" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.779212412Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=153.403µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.781649502Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.782588818Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=939.096µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.787326747Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.80973326Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=22.406083ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.818295033Z level=info msg="Executing migration" id="create cloud_migration_session v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.818996044Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=700.391µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.823206205Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - 
v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.824074389Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=867.724µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.828164288Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.828495353Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=330.935µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.831852789Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.832722983Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=870.054µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.837986471Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.865665342Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=27.678751ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.872746479Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.873558713Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=812.044µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.877083112Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.878954423Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.870551ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.882561633Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.883102712Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=540.489µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.888144676Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.888952959Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=807.843µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.89256689Z level=info msg="Executing migration" id="add snapshot upload_url column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.905039017Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.472037ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.908568676Z level=info msg="Executing migration" id="add snapshot status column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.915575513Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.005507ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.958590419Z level=info msg="Executing migration" id="add snapshot local_directory column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.971430553Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=12.841614ms
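The cloud_migration_session and cloud_migration_snapshot steps above show the usual SQLite-friendly way to reshape a table: rename the old table to a _tmp_qwerty name, create the v2 table, copy the rows across, then drop the temporary table. A compressed sketch of that sequence, with invented schema and column names for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cloud_migration_session (id INTEGER PRIMARY KEY, uid TEXT);
INSERT INTO cloud_migration_session (uid) VALUES ('abc123');

-- Rename table cloud_migration_session to cloud_migration_session_tmp_qwerty - v1
ALTER TABLE cloud_migration_session RENAME TO cloud_migration_session_tmp_qwerty;

-- create cloud_migration_session v2 (here with a hypothetical extra column)
CREATE TABLE cloud_migration_session (
    id INTEGER PRIMARY KEY, uid TEXT, org_id INTEGER NOT NULL DEFAULT 1);

-- create index UQE_cloud_migration_session_uid - v2
CREATE UNIQUE INDEX UQE_cloud_migration_session_uid ON cloud_migration_session (uid);

-- copy cloud_migration_session v1 to v2
INSERT INTO cloud_migration_session (id, uid)
    SELECT id, uid FROM cloud_migration_session_tmp_qwerty;

-- drop cloud_migration_session_tmp_qwerty
DROP TABLE cloud_migration_session_tmp_qwerty;
""")
print(conn.execute("SELECT id, uid, org_id FROM cloud_migration_session").fetchall())
```

The detour exists because older SQLite cannot drop or retype columns in place, so the migrator rebuilds the table instead.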
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.978206206Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.987052463Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=8.845197ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:57.990848187Z level=info msg="Executing migration" id="add snapshot encryption_key column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.000500807Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.65205ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.005982639Z level=info msg="Executing migration" id="add snapshot error_string column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.014694753Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=8.710254ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.018180981Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.019108367Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=927.136µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.022776948Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.06131037Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=38.535352ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.081605517Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.092108062Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=10.502705ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.095611051Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.105125379Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.513917ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.1100122Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.120689518Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.677658ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.125980486Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.13643371Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=10.493355ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.139862947Z level=info msg="Executing migration" id="increase resource_uid column length" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.139880458Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=17.961µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.142957139Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.142974139Z level=info
msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=17.01µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.145397049Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.155219593Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.819263ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.160093174Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.169223776Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.121771ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.173140991Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.173707451Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=566.33µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.17725823Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.177699187Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=440.387µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.191333224Z level=info msg="Executing migration" id="add record column to alert_rule table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.204055476Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.722412ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.208596651Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.218247262Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.650161ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.22589585Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.237428492Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=11.532141ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.241220454Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.248982024Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.76044ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.254050598Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.254602557Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=551.709µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.257700639Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 11:52:59 grafana | 
logger=migrator t=2025-06-16T11:46:58.267451801Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.750152ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.271485198Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.28123814Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.751112ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.285905618Z level=info msg="Executing migration" id="delete orphaned service account permissions" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.286187513Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=282.175µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.299034337Z level=info msg="Executing migration" id="adding action set permissions" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.299856801Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=819.764µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.306091814Z level=info msg="Executing migration" id="create user_external_session table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.307785453Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.693119ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.312576733Z level=info msg="Executing migration" id="increase name_id column length to 1024" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.312603933Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=28.39µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.316157562Z level=info msg="Executing migration" id="increase session_id column length to 1024" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.316188793Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=26.52µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.322086021Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.32265558Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=564.329µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.327188316Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.338591665Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.403159ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.342943937Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.349793412Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=6.849115ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.353084517Z level=info msg="Executing migration" id="add alert_rule_state table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.354055752Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=970.805µs 11:52:59 grafana | logger=migrator 
t=2025-06-16T11:46:58.36047452Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.362460402Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.985182ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.367254772Z level=info msg="Executing migration" id="add guid column to alert_rule table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.37729577Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.041188ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.381312077Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.388526207Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.21328ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.392900079Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.39292724Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.393140913Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.393157374Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=257.595µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.403171711Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.40376331Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=590.879µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.439298752Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.441061371Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.454661ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.444945766Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.446147436Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.20117ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.449372139Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.450502069Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.12965ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.454777269Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.455877628Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.099779ms
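The "cleanup alert_rule_version table" step above trims rule version history: with no configured record limit it falls back to keepVersions=100 and deletes older rows in batches of 50 (batches=0 here because the fresh database has nothing to trim). A minimal sketch of that keep-newest-N, delete-in-batches idea, with a simplified, hypothetical table layout:

```python
import sqlite3

KEEP_VERSIONS = 100  # fallback used above when the record limit is not set
BATCH_SIZE = 50

def cleanup_rule_versions(conn: sqlite3.Connection) -> int:
    """Delete all but the newest KEEP_VERSIONS rows per rule, BATCH_SIZE at a time."""
    deleted = 0
    while True:
        # Rank versions per rule, newest first; anything ranked past the
        # keep limit is eligible for deletion in this batch.
        rows = conn.execute(
            """SELECT id FROM (
                   SELECT id,
                          ROW_NUMBER() OVER (PARTITION BY rule_guid
                                             ORDER BY version DESC) AS rn
                   FROM alert_rule_version)
               WHERE rn > ? LIMIT ?""",
            (KEEP_VERSIONS, BATCH_SIZE),
        ).fetchall()
        if not rows:
            return deleted  # batches=0 on an empty table, as in the log
        conn.executemany("DELETE FROM alert_rule_version WHERE id = ?", rows)
        conn.commit()
        deleted += len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alert_rule_version "
             "(id INTEGER PRIMARY KEY, rule_guid TEXT, version INTEGER)")
print(cleanup_rule_versions(conn), "rows deleted")
```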
11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.459155763Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.470584483Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.42979ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.473946769Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.480945985Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=7.000256ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.484895251Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.494613962Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.718311ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.497985139Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.5077157Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.729991ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.511378441Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.511551875Z level=info msg="Removed 0 datasources:drilldown permissions" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.511566585Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=188.744µs 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.516759411Z level=info msg="Executing migration" id="remove title in folder unique index" 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.518290197Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.529176ms 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.523454713Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.642875117s 11:52:59 grafana | logger=migrator t=2025-06-16T11:46:58.524367718Z level=info msg="Unlocking database" 11:52:59 grafana | logger=sqlstore t=2025-06-16T11:46:58.543911893Z level=info msg="Created default admin" user=admin 11:52:59 grafana | logger=sqlstore t=2025-06-16T11:46:58.544114366Z level=info msg="Created default organization" 11:52:59 grafana | logger=secrets t=2025-06-16T11:46:58.56230892Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 11:52:59 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T11:46:58.65068236Z level=info msg="Restored cache from database" duration=437.197µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.66021159Z level=info msg="Locking database" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.66023896Z level=info msg="Starting DB migrations" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.667650093Z level=info msg="Executing migration" id="create resource_migration_log table" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.668465047Z level=info msg="Migration successfully executed" id="create
resource_migration_log table" duration=814.494µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.677228413Z level=info msg="Executing migration" id="Initialize resource tables" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.677266594Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=40.781µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.682259836Z level=info msg="Executing migration" id="drop table resource" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.682394849Z level=info msg="Migration successfully executed" id="drop table resource" duration=135.743µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.686620209Z level=info msg="Executing migration" id="create table resource" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.687785718Z level=info msg="Migration successfully executed" id="create table resource" duration=1.165179ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.692963844Z level=info msg="Executing migration" id="create table resource, index: 0" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.694702183Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.736579ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.699403772Z level=info msg="Executing migration" id="drop table resource_history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.699521044Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=117.712µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.702188369Z level=info msg="Executing migration" id="create table resource_history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.703310087Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.121699ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.709311306Z level=info msg="Executing migration" id="create table resource_history, index: 0" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.710614609Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.302873ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.714456222Z level=info msg="Executing migration" id="create table resource_history, index: 1" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.715591211Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.129399ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.72035549Z level=info msg="Executing migration" id="drop table resource_version" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.720475602Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=120.572µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.725498006Z level=info msg="Executing migration" id="create table resource_version" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.726848058Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.349222ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.730991648Z level=info msg="Executing migration" id="create table resource_version, index: 0" 11:52:59 grafana | logger=resource-migrator 
t=2025-06-16T11:46:58.732147656Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.153838ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.735463611Z level=info msg="Executing migration" id="drop table resource_blob" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.735537863Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=74.122µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.744430232Z level=info msg="Executing migration" id="create table resource_blob" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.746508275Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.079704ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.752146879Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.753491832Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.345873ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.759209767Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.761574397Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.363359ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.794035267Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.807899707Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.86547ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.811112461Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.820146602Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.033511ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.824685337Z level=info msg="Executing migration" id="Add index to resource_history for polling" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.825918078Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.232641ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.831548092Z level=info msg="Executing migration" id="Add index to resource for loading" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.832796442Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.247821ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.836101077Z level=info msg="Executing migration" id="Add column folder in resource_history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.846800005Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.698508ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.851509604Z level=info msg="Executing migration" id="Add column folder in resource" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.863355641Z level=info msg="Migration successfully executed" id="Add 
column folder in resource" duration=11.845517ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.867927837Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 11:52:59 grafana | logger=deletion-marker-migrator t=2025-06-16T11:46:58.867948548Z level=info msg="finding any deletion markers" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.868400295Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=471.528µs 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.872736477Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.875687017Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.95104ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.879665632Z level=info msg="Executing migration" id="Add generation to resource history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.890796808Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.131666ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.905884229Z level=info msg="Executing migration" id="Add generation index to resource history" 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.909128262Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=3.243543ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.915654482Z level=info msg="migrations completed" performed=26 skipped=0 duration=248.05252ms 11:52:59 grafana | logger=resource-migrator t=2025-06-16T11:46:58.916290582Z level=info msg="Unlocking database" 11:52:59 grafana | t=2025-06-16T11:46:58.916563957Z level=info caller=logger.go:214 time=2025-06-16T11:46:58.916534826Z msg="Using channel notifier" logger=sql-resource-server 11:52:59 grafana | logger=plugin.store t=2025-06-16T11:46:58.926686855Z level=info msg="Loading plugins..." 
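The plugin loader that starts here registers every discovered plugin under its id, and the registry refuses a second registration of the same id; that guard is what produces the "plugin table is already registered" errors on the next lines, where the core table panel's id is seen twice. A toy registry showing the idea (class and method names are illustrative, not Grafana's):

```python
class DuplicatePluginError(Exception):
    pass

class PluginRegistry:
    """Minimal id -> plugin map that refuses duplicate registrations."""

    def __init__(self) -> None:
        self._plugins: dict[str, object] = {}

    def register(self, plugin_id: str, plugin: object) -> None:
        if plugin_id in self._plugins:
            raise DuplicatePluginError(f"plugin {plugin_id} is already registered")
        self._plugins[plugin_id] = plugin

registry = PluginRegistry()
registry.register("table", object())      # first copy wins
try:
    registry.register("table", object())  # duplicate discovered later
except DuplicatePluginError as err:
    print(f'msg="Could not register plugin" pluginId=table error="{err}"')
```

Note that the load still completes ("Plugins loaded" count=53): the duplicate is logged at level=error but is not fatal to startup.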
11:52:59 grafana | logger=plugins.registration t=2025-06-16T11:46:58.962744496Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 11:52:59 grafana | logger=plugins.initialization t=2025-06-16T11:46:58.962772326Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 11:52:59 grafana | logger=plugin.store t=2025-06-16T11:46:58.962818947Z level=info msg="Plugins loaded" count=53 duration=36.133082ms 11:52:59 grafana | logger=query_data t=2025-06-16T11:46:58.969166342Z level=info msg="Query Service initialization" 11:52:59 grafana | logger=live.push_http t=2025-06-16T11:46:58.97380212Z level=info msg="Live Push Gateway initialization" 11:52:59 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T11:46:58.992675853Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 11:52:59 grafana | logger=ngalert t=2025-06-16T11:46:59.027030776Z level=info msg="Using simple database alert instance store" 11:52:59 grafana | logger=ngalert.state.manager.persist t=2025-06-16T11:46:59.027084427Z level=info msg="Using sync state persister" 11:52:59 grafana | logger=infra.usagestats.collector t=2025-06-16T11:46:59.031240755Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 11:52:59 grafana | logger=grafanaStorageLogger t=2025-06-16T11:46:59.034224325Z level=info msg="Storage starting" 11:52:59 grafana | logger=ngalert.state.manager t=2025-06-16T11:46:59.034587341Z level=info msg="Warming state cache for startup" 11:52:59 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T11:46:59.037041592Z level=info msg="Starting MultiOrg Alertmanager" 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.03868114Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 11:52:59 grafana | logger=http.server t=2025-06-16T11:46:59.041546797Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 11:52:59 grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.080299202Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 11:52:59 grafana | logger=ngalert.state.manager t=2025-06-16T11:46:59.093344569Z level=info msg="State cache has been initialized" states=0 duration=58.757158ms 11:52:59 grafana | logger=ngalert.scheduler t=2025-06-16T11:46:59.093386449Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 11:52:59 grafana | logger=ticker t=2025-06-16T11:46:59.09344925Z level=info msg=starting first_tick=2025-06-16T11:47:00Z 11:52:59 grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.094491278Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 11:52:59 grafana | logger=plugins.update.checker t=2025-06-16T11:46:59.128704307Z level=info msg="Update check succeeded" duration=92.719252ms 11:52:59 grafana | logger=grafana.update.checker t=2025-06-16T11:46:59.131610726Z level=info msg="Update check succeeded" duration=96.779581ms 11:52:59 grafana | logger=provisioning.datasources t=2025-06-16T11:46:59.164721017Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 11:52:59 grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.186949347Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
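The sqlstore.transactions lines above and below show how SQLite write contention during startup is handled: when a statement fails with "database is locked", the caller sleeps briefly and tries again, logging the retry counter. A minimal sketch of that strategy follows; the backoff values are invented, not Grafana's:

```python
import random
import sqlite3
import time

def exec_with_retry(conn: sqlite3.Connection, sql: str, max_retries: int = 5):
    """Retry a statement that fails with 'database is locked'."""
    for retry in range(max_retries):
        try:
            return conn.execute(sql)
        except sqlite3.OperationalError as err:
            # Re-raise anything that is not lock contention, or a final failure.
            if "database is locked" not in str(err) or retry == max_retries - 1:
                raise
            print(f'msg="Database locked, sleeping then retrying" retry={retry}')
            # Exponential backoff with a little jitter before the next attempt.
            time.sleep(0.05 * 2 ** retry + random.uniform(0, 0.01))

conn = sqlite3.connect(":memory:")
exec_with_retry(conn, "CREATE TABLE provisioned (name TEXT)")
```

Several subsystems (provisioning, the alert state warmup, plugin installs) hit the same database concurrently here, so a couple of retries at startup is expected and harmless.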
11:52:59 grafana | logger=provisioning.alerting t=2025-06-16T11:46:59.194696575Z level=info msg="starting to provision alerting" 11:52:59 grafana | logger=provisioning.alerting t=2025-06-16T11:46:59.194721075Z level=info msg="finished to provision alerting" 11:52:59 grafana | logger=provisioning.dashboard t=2025-06-16T11:46:59.19678011Z level=info msg="starting to provision dashboards" 11:52:59 grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.198409297Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 11:52:59 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T11:46:59.281920617Z level=info msg="Patterns update finished" duration=107.158274ms 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.462066354Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.467556216Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.468288848Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.468924348Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.469683641Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.471035883Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.473048957Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.475809423Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.47682018Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 11:52:59 grafana | logger=app-registry t=2025-06-16T11:46:59.540250185Z level=info msg="app registry initialized" 11:52:59 grafana | logger=plugin.installer t=2025-06-16T11:46:59.54053232Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 11:52:59 grafana | logger=installer.fs t=2025-06-16T11:46:59.705839991Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 11:52:59 grafana | logger=plugins.registration t=2025-06-16T11:46:59.74424593Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.74427724Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=705.53971ms 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.744297551Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 11:52:59 grafana | logger=plugin.installer t=2025-06-16T11:46:59.926167347Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 11:52:59 grafana | logger=installer.fs t=2025-06-16T11:46:59.990023789Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 11:52:59 grafana | logger=plugins.registration t=2025-06-16T11:47:00.006455043Z
level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.006476763Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=262.174602ms 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.006496803Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 11:52:59 grafana | logger=provisioning.dashboard t=2025-06-16T11:47:00.10608256Z level=info msg="finished to provision dashboards" 11:52:59 grafana | logger=plugin.installer t=2025-06-16T11:47:00.19148191Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 11:52:59 grafana | logger=installer.fs t=2025-06-16T11:47:00.257435947Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 11:52:59 grafana | logger=plugins.registration t=2025-06-16T11:47:00.274283797Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.274310298Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=267.809035ms 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.274329918Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 11:52:59 grafana | logger=plugin.installer t=2025-06-16T11:47:00.518243485Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 11:52:59 grafana | logger=installer.fs t=2025-06-16T11:47:00.586894166Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 11:52:59 grafana | logger=plugins.registration t=2025-06-16T11:47:00.60572012Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 11:52:59 grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.6057439Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=331.407242ms 11:52:59 grafana | logger=infra.usagestats t=2025-06-16T11:48:42.046022505Z level=info msg="Usage stats are ready to report" 11:52:59 kafka | ===> User 11:52:59 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:52:59 kafka | ===> Configuring ... 11:52:59 kafka | Running in Zookeeper mode... 11:52:59 kafka | ===> Running preflight checks ... 11:52:59 kafka | ===> Check if /var/lib/kafka/data is writable ... 11:52:59 kafka | ===> Check if Zookeeper is healthy ... 11:52:59 kafka | [2025-06-16 11:46:51,749] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,749] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,750] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 
11:46:51,750] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,754] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,757] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:52:59 kafka | [2025-06-16 11:46:51,761] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:52:59 kafka | [2025-06-16 11:46:51,767] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:51,785] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:51,786] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:51,792] INFO Socket connection established, initiating session, client: /172.17.0.5:44528, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:51,831] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000273560000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:51,949] INFO Session: 0x100000273560000 closed (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:51,950] INFO EventThread shut down for session: 0x100000273560000 (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | Using log4j config /etc/kafka/log4j.properties 11:52:59 kafka | ===> Launching ... 11:52:59 kafka | ===> Launching kafka ... 11:52:59 kafka | [2025-06-16 11:46:52,569] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 11:52:59 kafka | [2025-06-16 11:46:52,862] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:52:59 kafka | [2025-06-16 11:46:52,937] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 11:52:59 kafka | [2025-06-16 11:46:52,938] INFO starting (kafka.server.KafkaServer) 11:52:59 kafka | [2025-06-16 11:46:52,939] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 11:52:59 kafka | [2025-06-16 11:46:52,956] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
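The preflight check above opens a throwaway ZooKeeper session (connect string zookeeper:2181, session timeout 40000 ms), confirms the handshake, and closes it again. A minimal client-side equivalent, using a latch to wait for the SyncConnected event the log reports:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;

    public final class ZkPreflight {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Same connect string and 40000 ms session timeout as the preflight log above.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000,
                    event -> { if (event.getState() == KeeperState.SyncConnected) connected.countDown(); });
            if (!connected.await(30, TimeUnit.SECONDS)) {
                System.err.println("Zookeeper is not healthy");
            }
            zk.close(); // mirrors the "Session ... closed" record above
        }
    }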
(kafka.zookeeper.ZooKeeperClient) 11:52:59 kafka | [2025-06-16 11:46:52,961] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,961] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,961] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,961] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,962] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,966] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 11:52:59 kafka | [2025-06-16 11:46:52,971] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:52:59 kafka | [2025-06-16 11:46:52,976] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:52,978] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 11:52:59 kafka | [2025-06-16 11:46:52,981] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:52,987] INFO Socket connection established, initiating session, client: /172.17.0.5:44530, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:52,996] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000273560001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 11:52:59 kafka | [2025-06-16 11:46:52,999] INFO [ZooKeeperClient Kafka server] Connected. 
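The Client environment:os.memory.* records are the JVM's own view of its heap (note the broker process runs with max=1024MB versus the preflight tool's 8042MB). The same numbers, in MB, come straight from java.lang.Runtime:

    public final class MemEnv {
        public static void main(String[] args) {
            long mb = 1024 * 1024;
            Runtime rt = Runtime.getRuntime();
            System.out.println("os.memory.free="  + rt.freeMemory()  / mb + "MB");
            System.out.println("os.memory.max="   + rt.maxMemory()   / mb + "MB");
            System.out.println("os.memory.total=" + rt.totalMemory() / mb + "MB");
            System.out.println("java.home=" + System.getProperty("java.home"));
        }
    }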
(kafka.zookeeper.ZooKeeperClient) 11:52:59 kafka | [2025-06-16 11:46:53,299] INFO Cluster ID = Y_BS0uSaQHW9oN2tPXU35A (kafka.server.KafkaServer) 11:52:59 kafka | [2025-06-16 11:46:53,302] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 11:52:59 kafka | [2025-06-16 11:46:53,353] INFO KafkaConfig values: 11:52:59 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 11:52:59 kafka | alter.config.policy.class.name = null 11:52:59 kafka | alter.log.dirs.replication.quota.window.num = 11 11:52:59 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 11:52:59 kafka | authorizer.class.name = 11:52:59 kafka | auto.create.topics.enable = true 11:52:59 kafka | auto.include.jmx.reporter = true 11:52:59 kafka | auto.leader.rebalance.enable = true 11:52:59 kafka | background.threads = 10 11:52:59 kafka | broker.heartbeat.interval.ms = 2000 11:52:59 kafka | broker.id = 1 11:52:59 kafka | broker.id.generation.enable = true 11:52:59 kafka | broker.rack = null 11:52:59 kafka | broker.session.timeout.ms = 9000 11:52:59 kafka | client.quota.callback.class = null 11:52:59 kafka | compression.type = producer 11:52:59 kafka | connection.failed.authentication.delay.ms = 100 11:52:59 kafka | connections.max.idle.ms = 600000 11:52:59 kafka | connections.max.reauth.ms = 0 11:52:59 kafka | control.plane.listener.name = null 11:52:59 kafka | controlled.shutdown.enable = true 11:52:59 kafka | controlled.shutdown.max.retries = 3 11:52:59 kafka | controlled.shutdown.retry.backoff.ms = 5000 11:52:59 kafka | controller.listener.names = null 11:52:59 kafka | controller.quorum.append.linger.ms = 25 11:52:59 kafka | controller.quorum.election.backoff.max.ms = 1000 11:52:59 kafka | controller.quorum.election.timeout.ms = 1000 11:52:59 kafka | controller.quorum.fetch.timeout.ms = 2000 11:52:59 kafka | controller.quorum.request.timeout.ms = 2000 11:52:59 kafka | controller.quorum.retry.backoff.ms = 20 11:52:59 kafka | controller.quorum.voters = [] 11:52:59 kafka | controller.quota.window.num = 11 11:52:59 kafka | controller.quota.window.size.seconds = 1 11:52:59 kafka | controller.socket.timeout.ms = 30000 11:52:59 kafka | create.topic.policy.class.name = null 11:52:59 kafka | default.replication.factor = 1 11:52:59 kafka | delegation.token.expiry.check.interval.ms = 3600000 11:52:59 kafka | delegation.token.expiry.time.ms = 86400000 11:52:59 kafka | delegation.token.master.key = null 11:52:59 kafka | delegation.token.max.lifetime.ms = 604800000 11:52:59 kafka | delegation.token.secret.key = null 11:52:59 kafka | delete.records.purgatory.purge.interval.requests = 1 11:52:59 kafka | delete.topic.enable = true 11:52:59 kafka | early.start.listeners = null 11:52:59 kafka | fetch.max.bytes = 57671680 11:52:59 kafka | fetch.purgatory.purge.interval.requests = 1000 11:52:59 kafka | group.initial.rebalance.delay.ms = 3000 11:52:59 kafka | group.max.session.timeout.ms = 1800000 11:52:59 kafka | group.max.size = 2147483647 11:52:59 kafka | group.min.session.timeout.ms = 6000 11:52:59 kafka | initial.broker.registration.timeout.ms = 60000 11:52:59 kafka | inter.broker.listener.name = PLAINTEXT 11:52:59 kafka | inter.broker.protocol.version = 3.4-IV0 11:52:59 kafka | kafka.metrics.polling.interval.secs = 10 11:52:59 kafka | kafka.metrics.reporters = [] 11:52:59 kafka | leader.imbalance.check.interval.seconds = 300 11:52:59 kafka | leader.imbalance.per.broker.percentage = 10 11:52:59 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 11:52:59 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 11:52:59 kafka | log.cleaner.backoff.ms = 15000 11:52:59 kafka | log.cleaner.dedupe.buffer.size = 134217728 11:52:59 kafka | log.cleaner.delete.retention.ms = 86400000 11:52:59 kafka | log.cleaner.enable = true 11:52:59 kafka | log.cleaner.io.buffer.load.factor = 0.9 11:52:59 kafka | log.cleaner.io.buffer.size = 524288 11:52:59 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 11:52:59 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 11:52:59 kafka | log.cleaner.min.cleanable.ratio = 0.5 11:52:59 kafka | log.cleaner.min.compaction.lag.ms = 0 11:52:59 kafka | log.cleaner.threads = 1 11:52:59 kafka | log.cleanup.policy = [delete] 11:52:59 kafka | log.dir = /tmp/kafka-logs 11:52:59 kafka | log.dirs = /var/lib/kafka/data 11:52:59 kafka | log.flush.interval.messages = 9223372036854775807 11:52:59 kafka | log.flush.interval.ms = null 11:52:59 kafka | log.flush.offset.checkpoint.interval.ms = 60000 11:52:59 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 11:52:59 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 11:52:59 kafka | log.index.interval.bytes = 4096 11:52:59 kafka | log.index.size.max.bytes = 10485760 11:52:59 kafka | log.message.downconversion.enable = true 11:52:59 kafka | log.message.format.version = 3.0-IV1 11:52:59 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 11:52:59 kafka | log.message.timestamp.type = CreateTime 11:52:59 kafka | log.preallocate = false 11:52:59 kafka | log.retention.bytes = -1 11:52:59 kafka | log.retention.check.interval.ms = 300000 11:52:59 kafka | log.retention.hours = 168 11:52:59 kafka | log.retention.minutes = null 11:52:59 kafka | log.retention.ms = null 11:52:59 kafka | log.roll.hours = 168 11:52:59 kafka | log.roll.jitter.hours = 0 11:52:59 kafka | log.roll.jitter.ms = null 11:52:59 kafka | log.roll.ms = null 11:52:59 kafka | log.segment.bytes = 1073741824 11:52:59 kafka | log.segment.delete.delay.ms = 60000 11:52:59 kafka | max.connection.creation.rate = 2147483647 11:52:59 kafka | max.connections = 2147483647 11:52:59 kafka | max.connections.per.ip = 2147483647 11:52:59 kafka | max.connections.per.ip.overrides = 11:52:59 kafka | max.incremental.fetch.session.cache.slots = 1000 11:52:59 kafka | message.max.bytes = 1048588 11:52:59 kafka | metadata.log.dir = null 11:52:59 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 11:52:59 kafka | metadata.log.max.snapshot.interval.ms = 3600000 11:52:59 kafka | metadata.log.segment.bytes = 1073741824 11:52:59 kafka | metadata.log.segment.min.bytes = 8388608 11:52:59 kafka | metadata.log.segment.ms = 604800000 11:52:59 kafka | metadata.max.idle.interval.ms = 500 11:52:59 kafka | metadata.max.retention.bytes = 104857600 11:52:59 kafka | metadata.max.retention.ms = 604800000 11:52:59 kafka | metric.reporters = [] 11:52:59 kafka | metrics.num.samples = 2 11:52:59 kafka | metrics.recording.level = INFO 11:52:59 kafka | metrics.sample.window.ms = 30000 11:52:59 kafka | min.insync.replicas = 1 11:52:59 kafka | node.id = 1 11:52:59 kafka | num.io.threads = 8 11:52:59 kafka | num.network.threads = 3 11:52:59 kafka | num.partitions = 1 11:52:59 kafka | num.recovery.threads.per.data.dir = 1 11:52:59 kafka | num.replica.alter.log.dirs.threads = null 11:52:59 kafka | num.replica.fetchers = 1 11:52:59 kafka | offset.metadata.max.bytes = 4096 11:52:59 kafka | offsets.commit.required.acks = -1 
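Most of the KafkaConfig dump is defaults; the values that matter for this single-broker CSIT topology are the dual listener mapping and the replication factors forced down to 1. Collected here as the equivalent server.properties entries (a sketch for reference; the job actually injects these through the compose file's environment):

    import java.util.Properties;

    public final class BrokerOverrides {
        public static void main(String[] args) {
            Properties broker = new Properties();
            broker.setProperty("broker.id", "1");
            broker.setProperty("listeners",
                    "PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092");
            broker.setProperty("advertised.listeners",
                    "PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092");
            broker.setProperty("listener.security.protocol.map",
                    "PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT");
            broker.setProperty("offsets.topic.replication.factor", "1");
            broker.setProperty("zookeeper.connect", "zookeeper:2181");
            broker.forEach((k, v) -> System.out.println(k + " = " + v));
        }
    }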
11:52:59 kafka | offsets.commit.timeout.ms = 5000 11:52:59 kafka | offsets.load.buffer.size = 5242880 11:52:59 kafka | offsets.retention.check.interval.ms = 600000 11:52:59 kafka | offsets.retention.minutes = 10080 11:52:59 kafka | offsets.topic.compression.codec = 0 11:52:59 kafka | offsets.topic.num.partitions = 50 11:52:59 kafka | offsets.topic.replication.factor = 1 11:52:59 kafka | offsets.topic.segment.bytes = 104857600 11:52:59 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 11:52:59 kafka | password.encoder.iterations = 4096 11:52:59 kafka | password.encoder.key.length = 128 11:52:59 kafka | password.encoder.keyfactory.algorithm = null 11:52:59 kafka | password.encoder.old.secret = null 11:52:59 kafka | password.encoder.secret = null 11:52:59 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 11:52:59 kafka | process.roles = [] 11:52:59 kafka | producer.id.expiration.check.interval.ms = 600000 11:52:59 kafka | producer.id.expiration.ms = 86400000 11:52:59 kafka | producer.purgatory.purge.interval.requests = 1000 11:52:59 kafka | queued.max.request.bytes = -1 11:52:59 kafka | queued.max.requests = 500 11:52:59 kafka | quota.window.num = 11 11:52:59 kafka | quota.window.size.seconds = 1 11:52:59 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 11:52:59 kafka | remote.log.manager.task.interval.ms = 30000 11:52:59 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 11:52:59 kafka | remote.log.manager.task.retry.backoff.ms = 500 11:52:59 kafka | remote.log.manager.task.retry.jitter = 0.2 11:52:59 kafka | remote.log.manager.thread.pool.size = 10 11:52:59 kafka | remote.log.metadata.manager.class.name = null 11:52:59 kafka | remote.log.metadata.manager.class.path = null 11:52:59 kafka | remote.log.metadata.manager.impl.prefix = null 11:52:59 kafka | remote.log.metadata.manager.listener.name = null 11:52:59 kafka | remote.log.reader.max.pending.tasks = 100 11:52:59 kafka | remote.log.reader.threads = 10 11:52:59 kafka | remote.log.storage.manager.class.name = null 11:52:59 kafka | remote.log.storage.manager.class.path = null 11:52:59 kafka | remote.log.storage.manager.impl.prefix = null 11:52:59 kafka | remote.log.storage.system.enable = false 11:52:59 kafka | replica.fetch.backoff.ms = 1000 11:52:59 kafka | replica.fetch.max.bytes = 1048576 11:52:59 kafka | replica.fetch.min.bytes = 1 11:52:59 kafka | replica.fetch.response.max.bytes = 10485760 11:52:59 kafka | replica.fetch.wait.max.ms = 500 11:52:59 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 11:52:59 kafka | replica.lag.time.max.ms = 30000 11:52:59 kafka | replica.selector.class = null 11:52:59 kafka | replica.socket.receive.buffer.bytes = 65536 11:52:59 kafka | replica.socket.timeout.ms = 30000 11:52:59 kafka | replication.quota.window.num = 11 11:52:59 kafka | replication.quota.window.size.seconds = 1 11:52:59 kafka | request.timeout.ms = 30000 11:52:59 kafka | reserved.broker.max.id = 1000 11:52:59 kafka | sasl.client.callback.handler.class = null 11:52:59 kafka | sasl.enabled.mechanisms = [GSSAPI] 11:52:59 kafka | sasl.jaas.config = null 11:52:59 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:52:59 kafka | sasl.kerberos.min.time.before.relogin = 60000 11:52:59 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 11:52:59 kafka | sasl.kerberos.service.name = null 11:52:59 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 11:52:59 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 11:52:59 kafka | 
sasl.login.callback.handler.class = null 11:52:59 kafka | sasl.login.class = null 11:52:59 kafka | sasl.login.connect.timeout.ms = null 11:52:59 kafka | sasl.login.read.timeout.ms = null 11:52:59 kafka | sasl.login.refresh.buffer.seconds = 300 11:52:59 kafka | sasl.login.refresh.min.period.seconds = 60 11:52:59 kafka | sasl.login.refresh.window.factor = 0.8 11:52:59 kafka | sasl.login.refresh.window.jitter = 0.05 11:52:59 kafka | sasl.login.retry.backoff.max.ms = 10000 11:52:59 kafka | sasl.login.retry.backoff.ms = 100 11:52:59 kafka | sasl.mechanism.controller.protocol = GSSAPI 11:52:59 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 11:52:59 kafka | sasl.oauthbearer.clock.skew.seconds = 30 11:52:59 kafka | sasl.oauthbearer.expected.audience = null 11:52:59 kafka | sasl.oauthbearer.expected.issuer = null 11:52:59 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:52:59 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:52:59 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:52:59 kafka | sasl.oauthbearer.jwks.endpoint.url = null 11:52:59 kafka | sasl.oauthbearer.scope.claim.name = scope 11:52:59 kafka | sasl.oauthbearer.sub.claim.name = sub 11:52:59 kafka | sasl.oauthbearer.token.endpoint.url = null 11:52:59 kafka | sasl.server.callback.handler.class = null 11:52:59 kafka | sasl.server.max.receive.size = 524288 11:52:59 kafka | security.inter.broker.protocol = PLAINTEXT 11:52:59 kafka | security.providers = null 11:52:59 kafka | socket.connection.setup.timeout.max.ms = 30000 11:52:59 kafka | socket.connection.setup.timeout.ms = 10000 11:52:59 kafka | socket.listen.backlog.size = 50 11:52:59 kafka | socket.receive.buffer.bytes = 102400 11:52:59 kafka | socket.request.max.bytes = 104857600 11:52:59 kafka | socket.send.buffer.bytes = 102400 11:52:59 kafka | ssl.cipher.suites = [] 11:52:59 kafka | ssl.client.auth = none 11:52:59 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:52:59 kafka | ssl.endpoint.identification.algorithm = https 11:52:59 kafka | ssl.engine.factory.class = null 11:52:59 kafka | ssl.key.password = null 11:52:59 kafka | ssl.keymanager.algorithm = SunX509 11:52:59 kafka | ssl.keystore.certificate.chain = null 11:52:59 kafka | ssl.keystore.key = null 11:52:59 kafka | ssl.keystore.location = null 11:52:59 kafka | ssl.keystore.password = null 11:52:59 kafka | ssl.keystore.type = JKS 11:52:59 kafka | ssl.principal.mapping.rules = DEFAULT 11:52:59 kafka | ssl.protocol = TLSv1.3 11:52:59 kafka | ssl.provider = null 11:52:59 kafka | ssl.secure.random.implementation = null 11:52:59 kafka | ssl.trustmanager.algorithm = PKIX 11:52:59 kafka | ssl.truststore.certificates = null 11:52:59 kafka | ssl.truststore.location = null 11:52:59 kafka | ssl.truststore.password = null 11:52:59 kafka | ssl.truststore.type = JKS 11:52:59 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 11:52:59 kafka | transaction.max.timeout.ms = 900000 11:52:59 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 11:52:59 kafka | transaction.state.log.load.buffer.size = 5242880 11:52:59 kafka | transaction.state.log.min.isr = 2 11:52:59 kafka | transaction.state.log.num.partitions = 50 11:52:59 kafka | transaction.state.log.replication.factor = 3 11:52:59 kafka | transaction.state.log.segment.bytes = 104857600 11:52:59 kafka | transactional.id.expiration.ms = 604800000 11:52:59 kafka | unclean.leader.election.enable = false 11:52:59 kafka | zookeeper.clientCnxnSocket = null 11:52:59 kafka | 
zookeeper.connect = zookeeper:2181 11:52:59 kafka | zookeeper.connection.timeout.ms = null 11:52:59 kafka | zookeeper.max.in.flight.requests = 10 11:52:59 kafka | zookeeper.metadata.migration.enable = false 11:52:59 kafka | zookeeper.session.timeout.ms = 18000 11:52:59 kafka | zookeeper.set.acl = false 11:52:59 kafka | zookeeper.ssl.cipher.suites = null 11:52:59 kafka | zookeeper.ssl.client.enable = false 11:52:59 kafka | zookeeper.ssl.crl.enable = false 11:52:59 kafka | zookeeper.ssl.enabled.protocols = null 11:52:59 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 11:52:59 kafka | zookeeper.ssl.keystore.location = null 11:52:59 kafka | zookeeper.ssl.keystore.password = null 11:52:59 kafka | zookeeper.ssl.keystore.type = null 11:52:59 kafka | zookeeper.ssl.ocsp.enable = false 11:52:59 kafka | zookeeper.ssl.protocol = TLSv1.2 11:52:59 kafka | zookeeper.ssl.truststore.location = null 11:52:59 kafka | zookeeper.ssl.truststore.password = null 11:52:59 kafka | zookeeper.ssl.truststore.type = null 11:52:59 kafka | (kafka.server.KafkaConfig) 11:52:59 kafka | [2025-06-16 11:46:53,389] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:52:59 kafka | [2025-06-16 11:46:53,392] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:52:59 kafka | [2025-06-16 11:46:53,393] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:52:59 kafka | [2025-06-16 11:46:53,393] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:52:59 kafka | [2025-06-16 11:46:53,440] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:46:53,442] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:46:53,454] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:46:53,454] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:46:53,456] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:46:53,465] INFO Starting the log cleaner (kafka.log.LogCleaner) 11:52:59 kafka | [2025-06-16 11:46:53,507] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 11:52:59 kafka | [2025-06-16 11:46:53,519] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 11:52:59 kafka | [2025-06-16 11:46:53,534] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 11:52:59 kafka | [2025-06-16 11:46:53,577] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 11:52:59 kafka | [2025-06-16 11:46:53,923] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:52:59 kafka | [2025-06-16 11:46:53,928] INFO Awaiting socket connections on 0.0.0.0:9092. 
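The same effective configuration the broker just dumped can be read back over the wire once it is serving, via the AdminClient describeConfigs API. This sketch assumes the host-mapped PLAINTEXT_HOST listener (localhost:29092) from the dump above:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.ConfigResource;

    public final class ShowBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // Broker id 1, as logged in the KafkaConfig dump.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
                admin.describeConfigs(List.of(broker)).all().get()
                     .get(broker).entries()
                     .forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }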
(kafka.network.DataPlaneAcceptor) 11:52:59 kafka | [2025-06-16 11:46:53,950] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 11:52:59 kafka | [2025-06-16 11:46:53,950] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:52:59 kafka | [2025-06-16 11:46:53,951] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 11:52:59 kafka | [2025-06-16 11:46:53,956] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 11:52:59 kafka | [2025-06-16 11:46:53,967] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 11:52:59 kafka | [2025-06-16 11:46:53,992] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:53,995] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:53,996] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:53,999] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:54,016] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 11:52:59 kafka | [2025-06-16 11:46:54,039] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 11:52:59 kafka | [2025-06-16 11:46:54,061] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750074414053,1750074414053,1,0,0,72057604562878465,258,0,27 11:52:59 kafka | (kafka.zk.KafkaZkClient) 11:52:59 kafka | [2025-06-16 11:46:54,062] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 11:52:59 kafka | [2025-06-16 11:46:54,114] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 11:52:59 kafka | [2025-06-16 11:46:54,126] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:54,133] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:54,133] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:54,144] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 11:52:59 kafka | [2025-06-16 11:46:54,150] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:46:54,154] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,159] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,159] INFO [GroupCoordinator 1]: Startup complete. 
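Broker registration is just a znode: the log above shows /brokers/ids/1 created with czxid 27 and carrying the advertised endpoints. The JSON payload can be read back with a plain ZooKeeper client (sketch; connection-state handling reduced to a latch as before):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;

    public final class ShowBrokerZnode {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000,
                    event -> { if (event.getState() == KeeperState.SyncConnected) connected.countDown(); });
            connected.await();
            // Registration JSON: endpoints, listener map, broker epoch, ...
            byte[] data = zk.getData("/brokers/ids/1", false, null);
            System.out.println(new String(data));
            zk.close();
        }
    }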
(kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:46:54,165] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 11:52:59 kafka | [2025-06-16 11:46:54,189] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 11:52:59 kafka | [2025-06-16 11:46:54,196] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 11:52:59 kafka | [2025-06-16 11:46:54,197] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 11:52:59 kafka | [2025-06-16 11:46:54,200] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 11:52:59 kafka | [2025-06-16 11:46:54,200] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,207] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,210] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,215] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,238] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,247] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,249] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:52:59 kafka | [2025-06-16 11:46:54,254] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 11:52:59 kafka | [2025-06-16 11:46:54,265] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 11:52:59 kafka | [2025-06-16 11:46:54,266] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,266] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,267] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,267] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,271] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 11:52:59 kafka | [2025-06-16 11:46:54,271] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,274] INFO [Controller id=1 epoch=1] Sending 
UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:46:54,279] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,280] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,290] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,291] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,291] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,291] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,293] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 11:52:59 kafka | [2025-06-16 11:46:54,293] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,296] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 11:52:59 kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,300] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,301] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,303] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 11:52:59 kafka | [2025-06-16 11:46:54,311] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:54,316] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
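Broker 1 has now won the controller election (epoch 1) and primed its replica and partition state machines. Which node is the active controller can be confirmed from any client via describeCluster (same localhost:29092 assumption as in the earlier sketches):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;

    public final class ShowController {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                Node controller = admin.describeCluster().controller().get();
                // Should report id=1, matching the election logged above.
                System.out.println("controller: " + controller);
            }
        }
    }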
(kafka.network.SocketServer) 11:52:59 kafka | [2025-06-16 11:46:54,337] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 11:52:59 kafka | [2025-06-16 11:46:54,337] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 11:52:59 kafka | [2025-06-16 11:46:54,337] INFO Kafka startTimeMs: 1750074414330 (org.apache.kafka.common.utils.AppInfoParser) 11:52:59 kafka | [2025-06-16 11:46:54,340] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 11:52:59 kafka | [2025-06-16 11:46:54,374] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:52:59 kafka | [2025-06-16 11:46:54,381] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:46:54,398] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:52:59 kafka | [2025-06-16 11:46:59,312] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:46:59,313] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,417] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:52:59 kafka | [2025-06-16 11:47:27,421] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:52:59 kafka | [2025-06-16 11:47:27,430] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,435] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,451] INFO 
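The records above show the first topics of the test run being created: policy-pdp-pap with a single partition and __consumer_offsets with 50 compacted partitions, all assigned to broker 1. Programmatically, the single-partition creation would look like this AdminClient sketch (the CSIT job itself relies on auto.create.topics.enable=true rather than an explicit call):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public final class CreateCsitTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1 -- the assignment
                // HashMap(0 -> ArrayBuffer(1)) logged for policy-pdp-pap.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }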
[Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(1vUAlylBSMO4USo_S3aOEQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,451] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,453] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,453] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,457] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,457] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,484] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,491] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,492] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,495] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,495] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,499] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,500] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,516] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(qcqke507RcCh6aE31A-Zkw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,516] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:47:27,517] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,518] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,518] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state 
from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state 
from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 
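
The controller entries above record the first half of Kafka's topic-bootstrap state machine for the auto-created __consumer_offsets topic: all 50 partitions move from NonExistentPartition to NewPartition, and each single replica on broker 1 moves from NonExistentReplica to NewReplica. The 50-partition, replication-factor-1 layout matches a single-broker CSIT setup, where the broker's offsets.topic.num.partitions and offsets.topic.replication.factor settings evidently resolve to 50 and 1. As a minimal sketch (not part of this job), the same layout can be confirmed from a client with the Kafka AdminClient; the broker address kafka:9092 is taken from the response entries later in this section, and kafka-clients 3.x is assumed for allTopicNames():

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            // Expect 50 partitions, each with replicas=[1] and isr=[1],
            // mirroring the ReplicaAssignment(replicas=1, ...) map above.
            System.out.printf("partitions=%d%n", desc.partitions().size());
            desc.partitions().forEach(p -> System.out.printf(
                    "partition %d: leader=%s replicas=%s isr=%s%n",
                    p.partition(), p.leader(), p.replicas(), p.isr()));
        }
    }
}
```
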
11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to 
NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,532] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,592] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,610] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,613] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,613] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,615] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(1vUAlylBSMO4USo_S3aOEQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,633] INFO [Broker id=1] Finished LeaderAndIsr request in 136ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with 
state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,634] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=1vUAlylBSMO4USo_S3aOEQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 
(state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,642] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,642] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,642] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,643] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,643] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,643] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,647] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,649] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,653] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
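The TRACE entries above record the controller moving every __consumer_offsets replica from NewReplica to OnlineReplica while broker 1 receives one LeaderAndIsr entry per partition, each with leader=1 and isr=[1]. A minimal client-side sketch of verifying that assignment after startup; the confluent-kafka Python package is an assumed library choice, and kafka:9092 is the broker address reported in the UPDATE_METADATA response above:

# Sketch (not part of the CSIT run): confirm broker 1 is leader and sole
# ISR member for all 50 __consumer_offsets partitions.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
topic = admin.list_topics(timeout=10).topics["__consumer_offsets"]

for pid, part in sorted(topic.partitions.items()):
    # Each PartitionMetadata mirrors one LeaderAndIsrPartitionState entry.
    print(f"partition {pid}: leader={part.leader} replicas={part.replicas} isrs={part.isrs}")

assert len(topic.partitions) == 50
assert all(p.leader == 1 for p in topic.partitions.values())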
11:52:59 kafka | [2025-06-16 11:47:27,681] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,681] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,687] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
11:52:59 kafka | [2025-06-16 11:47:27,687] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,693] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,694] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,695] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,695] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,696] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
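Each partition's log is created with the compacted-topic settings shown above: cleanup.policy=compact (the offsets topic keeps only the latest commit per key), compression.type=producer, and segment.bytes=104857600 (100 MiB segments). A minimal sketch of reading those effective settings back, again assuming confluent-kafka and the same bootstrap address:

# Sketch: read back the topic-level configuration that the LogManager
# entries above report for __consumer_offsets.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
resource = ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets")

# describe_configs() returns one future per requested resource.
for res, future in admin.describe_configs([resource]).items():
    configs = future.result()
    for name in ("cleanup.policy", "compression.type", "segment.bytes"):
        print(name, "=", configs[name].value)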
11:52:59 kafka | [2025-06-16 11:47:27,705] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,706] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,706] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,706] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,707] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,713] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,713] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,714] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,714] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,714] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,721] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,722] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,722] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,723] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,723] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,731] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,731] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,732] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,732] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,732] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,738] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,739] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,739] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,739] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,739] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,746] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,747] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,747] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,747] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,748] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,754] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,755] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,755] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,755] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,755] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,762] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,763] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,763] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,763] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,763] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,770] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,771] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,771] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,771] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,771] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,778] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,779] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,780] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,780] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,780] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,786] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,788] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,788] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,788] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,789] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,795] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,796] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,796] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,797] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,797] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,804] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,805] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,805] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,805] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,805] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,812] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,813] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,813] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,813] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,813] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,821] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,821] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,822] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,822] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,822] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,829] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,830] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,830] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,831] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,831] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,837] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,838] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,838] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,838] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,838] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:27,845] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:47:27,845] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:47:27,845] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,846] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:47:27,846] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,852] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,853] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,853] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,853] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,853] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,860] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,861] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,861] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,861] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,861] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,868] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,869] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,869] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,869] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,870] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,876] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,877] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,877] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,877] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,877] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,884] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,884] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,885] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,885] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,885] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,892] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,892] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,892] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,893] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,893] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,901] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,901] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,902] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,902] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,902] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,909] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,910] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,910] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,910] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,910] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,919] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,920] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,920] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,920] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,920] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,927] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,928] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,928] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,928] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,929] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,939] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,940] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,940] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,941] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,941] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,952] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,954] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,954] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,954] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,954] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,965] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,966] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,967] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,967] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,967] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,980] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,981] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,981] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,982] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,982] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,990] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:27,991] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:27,991] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,991] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:27,991] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:27,999] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,000] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,000] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,000] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,000] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,007] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,008] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,008] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,009] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,009] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,016] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,017] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,017] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,017] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,017] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,025] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,026] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,026] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,026] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,026] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,033] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,035] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,035] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,035] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,035] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,042] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,043] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,043] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,043] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,044] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,050] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,051] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,051] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,051] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,051] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,058] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,059] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,059] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,060] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,060] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,067] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,067] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,068] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,068] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,068] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,075] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,076] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,076] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,076] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,076] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,083] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,084] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,084] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,084] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,084] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,090] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,091] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,091] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,091] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,091] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,098] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,098] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,099] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,099] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,099] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,106] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,107] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,107] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,107] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,107] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,114] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,115] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,115] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,115] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,115] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,119] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:52:59 kafka | [2025-06-16 11:47:28,120] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:52:59 kafka | [2025-06-16 11:47:28,120] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,120] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 11:52:59 kafka | [2025-06-16 11:47:28,121] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:52:59 
kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-27 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:52:59 kafka | [2025-06-16 11:47:28,129] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:47:28,130] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:52:59 kafka | [2025-06-16 11:47:28,137] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:47:28,137] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:52:59 kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:52:59 kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,138] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:52:59 kafka | [2025-06-16 11:47:28,153] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
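The burst of elections and loads above covers all 50 __consumer_offsets partitions: with a single broker in this cluster, broker 1 becomes coordinator for every one of them. Which of those partitions a given consumer group later maps to is simply a hash of its group.id. A minimal Java sketch of that mapping (assuming the 50-partition offsets.topic.num.partitions default seen in this log; Kafka's own GroupMetadataManager uses Utils.abs rather than Math.abs to cope with Integer.MIN_VALUE):

// Sketch of the group.id -> __consumer_offsets partition mapping.
// Assumption: offsets.topic.num.partitions = 50, matching this broker.
public class GroupPartitionMapper {
    static int partitionFor(String groupId, int numOffsetsPartitions) {
        // Kafka uses Utils.abs(groupId.hashCode) % n; Math.abs is a close
        // stand-in for illustration.
        return Math.abs(groupId.hashCode()) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        // These agree with the rebalance entries further down in this log:
        System.out.println(partitionFor("policy-pap", 50)); // 24
        System.out.println(partitionFor("opa-pdp", 50));    // 25
        System.out.println(partitionFor("testgrp", 50));    // 3
    }
}

The printed values match the coordinator entries below: policy-pap rebalances on __consumer_offsets-24, opa-pdp on __consumer_offsets-25, and testgrp on __consumer_offsets-3.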
11:52:59 kafka | [2025-06-16 11:47:28,153] INFO [Broker id=1] Finished LeaderAndIsr request in 506ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,155] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=qcqke507RcCh6aE31A-Zkw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,160] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,160] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
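Each TRACE above records broker 1 caching the controller's leader metadata for one offsets partition. The same leader/ISR view is observable from outside the broker through the Admin API; a minimal sketch (assuming kafka-clients 3.1+ for allTopicNames, and the kafka:9092 bootstrap address this broker advertises):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ShowPartitionLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The broker in this log advertises itself as kafka:9092.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            Map<String, TopicDescription> topics = admin
                .describeTopics(List.of("__consumer_offsets"))
                .allTopicNames().get();
            // With a single broker, each partition should report leader=1,
            // isr=[1], mirroring the UpdateMetadataPartitionState entries above.
            topics.get("__consumer_offsets").partitions().forEach(p ->
                System.out.printf("partition=%d leader=%s isr=%s%n",
                    p.partition(), p.leader(), p.isr()));
        }
    }
}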
11:52:59 kafka | [2025-06-16 11:47:28,160] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:47:28,306] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 in Empty state. Created a new member id consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:28,325] INFO [GroupCoordinator 1]: Preparing to rebalance group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 with group instance id None; client reason: need to re-join with the given member-id: consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:29,060] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:29,063] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:31,338] INFO [GroupCoordinator 1]: Stabilized group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 generation 1 (__consumer_offsets-0) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:31,363] INFO [GroupCoordinator 1]: Assignment received from leader consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 for group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:32,065] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:47:32,071] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:48:12,257] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
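The sequence above is the standard group-membership handshake: a first JoinGroup with an empty member id is answered with a server-generated id and a request to rejoin, the group moves through PreparingRebalance to Stabilized, and the elected leader then submits the partition assignment. Any subscribing consumer triggers it; a minimal Java sketch that would produce such a sequence (the group id policy-pap is taken from the log, but the topic name here is hypothetical; the opa-pdp member ids show a librdkafka client, for which the flow is identical):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class JoinGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap"); // group seen above
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() + poll() drives JoinGroup: the coordinator invents a
            // member id, asks the client to rejoin with it, stabilizes the
            // group, and waits for the leader's assignment, as logged above.
            consumer.subscribe(List.of("some-topic")); // hypothetical topic name
            consumer.poll(Duration.ofSeconds(5));
        }
    }
}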
(kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:48:12,258] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:48:15,260] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:48:15,264] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:49:22,961] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
11:52:59 kafka | [2025-06-16 11:49:22,974] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(6jNsCB5yTgmHWeOqbVmcTg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
11:52:59 kafka | [2025-06-16 11:49:22,974] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
11:52:59 kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,975] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,986] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,986] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,987] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,987] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,987] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,988] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,989] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,989] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
11:52:59 kafka | [2025-06-16 11:49:22,989] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,992] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:52:59 kafka | [2025-06-16 11:49:22,993] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
11:52:59 kafka | [2025-06-16 11:49:22,994] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:49:22,995] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
11:52:59 kafka | [2025-06-16 11:49:22,995] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(6jNsCB5yTgmHWeOqbVmcTg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
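The policy-notification topic above is created on demand by the broker (kafka.zk.AdminZkClient, single-partition assignment) rather than by an explicit client call. For reference, the same single-partition, replication-factor-1 layout could be created programmatically; a minimal sketch assuming the confluent-kafka-go AdminClient, which the CSIT containers do not actually use for this step:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	// Broker address as wired on the CSIT docker network.
	admin, err := kafka.NewAdminClient(&kafka.ConfigMap{"bootstrap.servers": "kafka:9092"})
	if err != nil {
		panic(err)
	}
	defer admin.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// One partition, replication factor 1: the layout the controller logs
	// as "initial partition assignment HashMap(0 -> ArrayBuffer(1))".
	results, err := admin.CreateTopics(ctx, []kafka.TopicSpecification{{
		Topic:             "policy-notification",
		NumPartitions:     1,
		ReplicationFactor: 1,
	}})
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Printf("%s: %s\n", r.Topic, r.Error)
	}
}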
11:52:59 kafka | [2025-06-16 11:49:22,998] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,999] INFO [Broker id=1] Finished LeaderAndIsr request in 11ms correlationId 5 from controller 1 for 1 partitions (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:22,999] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=6jNsCB5yTgmHWeOqbVmcTg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:23,000] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:23,001] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
11:52:59 kafka | [2025-06-16 11:49:23,001] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:52:59 kafka | [2025-06-16 11:51:01,349] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:01,350] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:04,352] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:04,356] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:04,471] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:04,472] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:04,473] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:26,957] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:26,958] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:29,958] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:29,961] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b for group testgrp for generation 3. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:29,967] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:29,967] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:29,968] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
11:52:59 kafka | [2025-06-16 11:51:52,395] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
(kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:52,396] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:55,398] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:55,401] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 for group testgrp for generation 5. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:55,407] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:55,407] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:55,408] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 11:52:59 kafka | [2025-06-16 11:51:59,316] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:51:59,317] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:51:59,322] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) 11:52:59 kafka | [2025-06-16 11:51:59,323] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) 11:52:59 policy-api | Waiting for policy-db-migrator port 6824... 11:52:59 policy-api | policy-db-migrator (172.17.0.6:6824) open 11:52:59 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 11:52:59 policy-api | 11:52:59 policy-api | . 
____ _ __ _ _ 11:52:59 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:52:59 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:52:59 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:52:59 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 11:52:59 policy-api | =========|_|==============|___/=/_/_/_/ 11:52:59 policy-api | 11:52:59 policy-api | :: Spring Boot :: (v3.4.6) 11:52:59 policy-api | 11:52:59 policy-api | [2025-06-16T11:47:06.739+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 11:52:59 policy-api | [2025-06-16T11:47:06.800+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin) 11:52:59 policy-api | [2025-06-16T11:47:06.801+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 11:52:59 policy-api | [2025-06-16T11:47:08.176+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:52:59 policy-api | [2025-06-16T11:47:08.351+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 165 ms. Found 6 JPA repository interfaces. 11:52:59 policy-api | [2025-06-16T11:47:08.990+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 11:52:59 policy-api | [2025-06-16T11:47:09.003+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:52:59 policy-api | [2025-06-16T11:47:09.005+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:52:59 policy-api | [2025-06-16T11:47:09.005+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 11:52:59 policy-api | [2025-06-16T11:47:09.042+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 11:52:59 policy-api | [2025-06-16T11:47:09.042+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2182 ms 11:52:59 policy-api | [2025-06-16T11:47:09.358+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:52:59 policy-api | [2025-06-16T11:47:09.434+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 11:52:59 policy-api | [2025-06-16T11:47:09.479+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 11:52:59 policy-api | [2025-06-16T11:47:09.858+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 11:52:59 policy-api | [2025-06-16T11:47:09.901+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 11:52:59 policy-api | [2025-06-16T11:47:10.102+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5342032a 11:52:59 policy-api | [2025-06-16T11:47:10.104+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
11:52:59 policy-api | [2025-06-16T11:47:10.187+00:00|INFO|pooling|main] HHH10001005: Database info: 11:52:59 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 11:52:59 policy-api | Database driver: undefined/unknown 11:52:59 policy-api | Database version: 16.4 11:52:59 policy-api | Autocommit mode: undefined/unknown 11:52:59 policy-api | Isolation level: undefined/unknown 11:52:59 policy-api | Minimum pool size: undefined/unknown 11:52:59 policy-api | Maximum pool size: undefined/unknown 11:52:59 policy-api | [2025-06-16T11:47:12.253+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 11:52:59 policy-api | [2025-06-16T11:47:12.257+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:52:59 policy-api | [2025-06-16T11:47:12.874+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 11:52:59 policy-api | [2025-06-16T11:47:13.724+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 11:52:59 policy-api | [2025-06-16T11:47:14.779+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 11:52:59 policy-api | [2025-06-16T11:47:14.823+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 11:52:59 policy-api | [2025-06-16T11:47:15.461+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 11:52:59 policy-api | [2025-06-16T11:47:15.591+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:52:59 policy-api | [2025-06-16T11:47:15.609+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 11:52:59 policy-api | [2025-06-16T11:47:15.634+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.553 seconds (process running for 10.149) 11:52:59 policy-api | [2025-06-16T11:47:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 11:52:59 policy-api | [2025-06-16T11:47:39.916+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 11:52:59 policy-api | [2025-06-16T11:47:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 11:52:59 policy-api | [2025-06-16T11:50:39.142+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: 11:52:59 policy-api | [] 11:52:59 policy-api | [2025-06-16T11:51:55.731+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID 11:52:59 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity 11:52:59 policy-api | 11:52:59 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot 11:52:59 policy-csit | Run Robot test 11:52:59 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 11:52:59 policy-csit | -v 
NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 11:52:59 policy-csit | -v POLICY_API_IP:policy-api:6969 11:52:59 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 11:52:59 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 11:52:59 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 11:52:59 policy-csit | -v APEX_IP:policy-apex-pdp:6969 11:52:59 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 11:52:59 policy-csit | -v KAFKA_IP:kafka:9092 11:52:59 policy-csit | -v PROMETHEUS_IP:prometheus:9090 11:52:59 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 11:52:59 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 11:52:59 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 11:52:59 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 11:52:59 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 11:52:59 policy-csit | -v TEMP_FOLDER:/tmp/distribution 11:52:59 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 11:52:59 policy-csit | -v TEST_ENV:docker 11:52:59 policy-csit | -v JAEGER_IP:jaeger:16686 11:52:59 policy-csit | Starting Robot test suites ... 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidateDataBeforePolicyDeployment | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidatesZonePolicy | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidatesVehiclePolicy | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidatesAbacPolicy | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS | 11:52:59 policy-csit | 5 tests, 5 passed, 0 failed 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... 
| PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS | 11:52:59 policy-csit | ------------------------------------------------------------------------------ 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS | 11:52:59 policy-csit | 5 tests, 5 passed, 0 failed 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS | 11:52:59 policy-csit | 10 tests, 10 passed, 0 failed 11:52:59 policy-csit | ============================================================================== 11:52:59 policy-csit | Output: /tmp/results/output.xml 11:52:59 policy-csit | Log: /tmp/results/log.html 11:52:59 policy-csit | Report: /tmp/results/report.html 11:52:59 policy-csit | RESULT: 0 11:52:59 policy-db-migrator | Waiting for postgres port 5432... 11:52:59 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 11:52:59 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 11:52:59 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 11:52:59 policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded! 11:52:59 policy-db-migrator | Initializing policyadmin... 11:52:59 policy-db-migrator | 321 blocks 11:52:59 policy-db-migrator | Preparing upgrade release version: 0800 11:52:59 policy-db-migrator | Preparing upgrade release version: 0900 11:52:59 policy-db-migrator | Preparing upgrade release version: 1000 11:52:59 policy-db-migrator | Preparing upgrade release version: 1100 11:52:59 policy-db-migrator | Preparing upgrade release version: 1200 11:52:59 policy-db-migrator | Preparing upgrade release version: 1300 11:52:59 policy-db-migrator | Done 11:52:59 policy-db-migrator | List of databases 11:52:59 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:52:59 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:52:59 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | 
postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:52:59 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | (9 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | name | version 11:52:59 policy-db-migrator | -------------+--------- 11:52:59 policy-db-migrator | policyadmin | 0 11:52:59 policy-db-migrator | (1 row) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:52:59 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 11:52:59 policy-db-migrator | (0 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 11:52:59 policy-db-migrator | List of databases 11:52:59 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:52:59 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:52:59 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:52:59 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | (9 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | upgrade: 0 -> 
1300 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > 
upgrade 0250-jpatoscanodetemplate_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 
policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0450-pdpgroup.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0470-pdp.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 
policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0570-toscadatatype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0630-toscanodetype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0660-toscaparameter.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0670-toscapolicies.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0690-toscapolicy.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 
11:52:59 policy-db-migrator | > upgrade 0730-toscaproperty.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0770-toscarequirement.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0780-toscarequirements.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0820-toscatrigger.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | 
INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 
policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-pdp.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 11:52:59 policy-db-migrator | UPDATE 0 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 11:52:59 policy-db-migrator | UPDATE 0 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0210-sequence.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0220-sequence.sql 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | 
INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0120-toscatrigger.sql 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0140-toscaparameter.sql 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0150-toscaproperty.sql 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-upgrade.sql 11:52:59 policy-db-migrator | msg 11:52:59 policy-db-migrator | --------------------------- 11:52:59 policy-db-migrator | upgrade to 1100 completed 11:52:59 policy-db-migrator | (1 row) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 11:52:59 policy-db-migrator | ALTER TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:52:59 policy-db-migrator | DROP INDEX 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0120-audit_sequence.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 11:52:59 policy-db-migrator | DROP TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 
11:52:59 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
11:52:59 policy-db-migrator | DROP TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
11:52:59 policy-db-migrator | DROP TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | policyadmin: OK: upgrade (1300)
11:52:59 policy-db-migrator | List of databases
11:52:59 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
11:52:59 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
11:52:59 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
11:52:59 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
11:52:59 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
11:52:59 policy-db-migrator | (9 rows)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | name | version
11:52:59 policy-db-migrator | -------------+---------
11:52:59 policy-db-migrator | policyadmin | 1300
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
11:52:59 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
11:52:59 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.804089
11:52:59 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.845331
11:52:59 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.886651
11:52:59 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.958198
11:52:59 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.00373
11:52:59 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.058252
11:52:59 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.112268
11:52:59 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.185732
11:52:59 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.233613
11:52:59 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.320445
11:52:59 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.371372
11:52:59 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.442722
11:52:59 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.496885
11:52:59 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.570912
11:52:59 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.622922
11:52:59 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.679369
11:52:59 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.733754
11:52:59 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.798204
11:52:59 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.845993
11:52:59 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.910867
11:52:59 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.950871
11:52:59 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.99255
11:52:59 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.052347
11:52:59 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.098895
11:52:59 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.155127
11:52:59 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.2059
11:52:59 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.262882
11:52:59 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.324801
11:52:59 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.379258
11:52:59 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.435411
11:52:59 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.509748
11:52:59 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.561559
11:52:59 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.64799
11:52:59 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.700869
11:52:59 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.791078
11:52:59 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.839196
11:52:59 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.922784
11:52:59 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.973951
11:52:59 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.024767
11:52:59 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.081791
11:52:59 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.150603
11:52:59 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.207176
11:52:59 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.283362
11:52:59 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.336493
11:52:59 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.417245
11:52:59 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.473243
11:52:59 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.535113
11:52:59 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.593104
11:52:59 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.657836
11:52:59 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.713272
11:52:59 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.783084
11:52:59 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.830891
11:52:59 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.900392
11:52:59 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.951083
11:52:59 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.028323
11:52:59 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.078084
11:52:59 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.144151
11:52:59 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.188981
11:52:59 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.238844
11:52:59 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.290598
11:52:59 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.349564
11:52:59 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.418835
11:52:59 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.46943
11:52:59 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.597282
11:52:59 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.676026
11:52:59 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.728496
11:52:59 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.802918
11:52:59 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.853849
11:52:59 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.928032
11:52:59 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.97744
11:52:59 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.056509
11:52:59 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.11757
11:52:59 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.192459
11:52:59 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.245755
11:52:59 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.2925
11:52:59 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.339619
11:52:59 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.401154
11:52:59 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.454113
11:52:59 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.505223
11:52:59 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.55323
11:52:59 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.603128
11:52:59 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.650231
11:52:59 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.703632
11:52:59 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.754283
11:52:59 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.807965
11:52:59 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.864591
11:52:59 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.916327
11:52:59 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.002321
11:52:59 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.049299
11:52:59 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.128332
11:52:59 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.178779
11:52:59 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.236366
11:52:59 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.288294
11:52:59 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.348401
11:52:59 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.398403
11:52:59 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.483457
11:52:59 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.536913
11:52:59 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.611596
11:52:59 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.661292
11:52:59 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.715382
11:52:59 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.774838
11:52:59 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.845094
11:52:59 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.893444
11:52:59 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.951206
11:52:59 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.001141
11:52:59 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.084426
11:52:59 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.135917
11:52:59 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.185963
11:52:59 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.233153
11:52:59 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.298947
11:52:59 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.351414
11:52:59 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.425094
11:52:59 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.484338
11:52:59 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.542888
11:52:59 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.596392
11:52:59 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.647081
11:52:59 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.699506
11:52:59 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.767157
11:52:59 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251146521100u | 1 | 2025-06-16 11:46:59.809486
11:52:59 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.859487
11:52:59 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.916221
11:52:59 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.967936
11:52:59 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:47:00.025724
11:52:59 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.080355
11:52:59 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.151224
11:52:59 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.198061
11:52:59 policy-db-migrator | (126 rows)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | policyadmin: OK @ 1300
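The changelog dump above is the migrator's bookkeeping: a shared schema_versions table records the level each schema is at (here policyadmin | 1300), and a per-schema <name>_schema_changelog table records every script run, tagged with the run timestamp and target version. A minimal Go sketch of reading that state back (the real policy-db-migrator is a shell wrapper around psql; the DSN and the choice of database holding the bookkeeping tables are assumptions here):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; the migrator itself shells out to psql
)

func main() {
	// Hypothetical DSN -- the CSIT job wires the real credentials via its compose files.
	db, err := sql.Open("postgres", "postgres://policy_user:secret@localhost:5432/policyadmin?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// schema_versions holds one row per schema, e.g. ("policyadmin", "1300").
	var version string
	if err := db.QueryRow(
		`SELECT version FROM schema_versions WHERE name = $1`, "policyadmin",
	).Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("policyadmin is at", version) // corresponds to "policyadmin: OK @ 1300" above
}
```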
11:52:59 policy-db-migrator | Initializing clampacm...
11:52:59 policy-db-migrator | 97 blocks
11:52:59 policy-db-migrator | Preparing upgrade release version: 1400
11:52:59 policy-db-migrator | Preparing upgrade release version: 1500
11:52:59 policy-db-migrator | Preparing upgrade release version: 1600
11:52:59 policy-db-migrator | Preparing upgrade release version: 1601
11:52:59 policy-db-migrator | Preparing upgrade release version: 1700
11:52:59 policy-db-migrator | Preparing upgrade release version: 1701
11:52:59 policy-db-migrator | Done
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | name | version
11:52:59 policy-db-migrator | ----------+---------
11:52:59 policy-db-migrator | clampacm | 0
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
11:52:59 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
11:52:59 policy-db-migrator | (0 rows)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | clampacm: upgrade available: 0 -> 1701
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | upgrade: 0 -> 1701
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-automationcomposition.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0500-participant.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-automationcomposition.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0300-participantreplica.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0400-participant.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-automationcomposition.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-automationcomposition.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-message.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-messagejob.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0200-automationcomposition.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0800-participantreplica.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | UPDATE 0
11:52:59 policy-db-migrator | ALTER TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | clampacm: OK: upgrade (1701)
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | name | version
11:52:59 policy-db-migrator | ----------+---------
11:52:59 policy-db-migrator | clampacm | 1701
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
11:52:59 policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
11:52:59 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:00.933219
11:52:59 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:00.991952
11:52:59 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.047219
11:52:59 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.103356
11:52:59 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.186256
11:52:59 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.239655
11:52:59 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.295668
11:52:59 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.349976
11:52:59 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.417444
11:52:59 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.468193
11:52:59 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.538769
11:52:59 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.582087
11:52:59 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.676816
11:52:59 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.726892
11:52:59 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.770172
11:52:59 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.830385
11:52:59 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.905509
11:52:59 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.958798
11:52:59 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.01427
11:52:59 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.062863
11:52:59 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.110017
11:52:59 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251147001600u | 1 | 2025-06-16 11:47:02.164169
11:52:59 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251147001600u | 1 | 2025-06-16 11:47:02.213874
11:52:59 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606251147001601u | 1 | 2025-06-16 11:47:02.286402
11:52:59 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251147001601u | 1 | 2025-06-16 11:47:02.337037
11:52:59 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.412557
11:52:59 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.469013
11:52:59 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.534024
11:52:59 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.591983
11:52:59 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.674877
11:52:59 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.724821
11:52:59 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.796468
11:52:59 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.851806
11:52:59 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.92591
11:52:59 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.97929
11:52:59 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:03.030606
11:52:59 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:03.078821
11:52:59 policy-db-migrator | (37 rows)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | clampacm: OK @ 1701
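Each "> upgrade" block above ends with an extra INSERT 0 1: after a script's DDL statements run, the migrator records one row in the schema's changelog, which is what populates the 37-row table just printed. A hedged sketch of that bookkeeping step (the column names are read off the table above; the function and its signature are invented for illustration):

```go
package migrator

import (
	"database/sql"
	"fmt"
)

// recordMigration appends one changelog row after a script has run -- the
// source of the trailing "INSERT 0 1" in each "> upgrade" block above.
// Columns mirror the changelog tables printed in this log; the rest is a sketch.
func recordMigration(db *sql.DB, schema, script, fromVer, toVer, tag string, ok bool) error {
	success := 0
	if ok {
		success = 1
	}
	// e.g. "clampacm_schema_changelog" for the clampacm schema
	query := fmt.Sprintf(`INSERT INTO %s_schema_changelog
	    (script, operation, from_version, to_version, tag, success, attime)
	    VALUES ($1, 'upgrade', $2, $3, $4, $5, now())`, schema)
	_, err := db.Exec(query, script, fromVer, toVer, tag, success)
	return err
}
```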
11:52:59 policy-db-migrator | Initializing pooling...
11:52:59 policy-db-migrator | 4 blocks
11:52:59 policy-db-migrator | Preparing upgrade release version: 1600
11:52:59 policy-db-migrator | Done
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | name | version
11:52:59 policy-db-migrator | ---------+---------
11:52:59 policy-db-migrator | pooling | 0
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
11:52:59 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
11:52:59 policy-db-migrator | (0 rows)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | pooling: upgrade available: 0 -> 1600
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
11:52:59 policy-db-migrator | upgrade: 0 -> 1600
11:52:59 policy-db-migrator | rc=0
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | > upgrade 0100-distributed.locking.sql
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | CREATE INDEX
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | INSERT 0 1
11:52:59 policy-db-migrator | pooling: OK: upgrade (1600)
[List of databases output identical to the listing shown earlier; omitted]
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
11:52:59 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
11:52:59 policy-db-migrator | CREATE TABLE
11:52:59 policy-db-migrator | name | version
11:52:59 policy-db-migrator | ---------+---------
11:52:59 policy-db-migrator | pooling | 1600
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
11:52:59 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
11:52:59 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251147031600u | 1 | 2025-06-16 11:47:03.758827
11:52:59 policy-db-migrator | (1 row)
11:52:59 policy-db-migrator |
11:52:59 policy-db-migrator | pooling: OK @ 1600
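The recurring NOTICE: relation "..." already exists, skipping lines are Postgres responding to CREATE TABLE IF NOT EXISTS: the migrator re-issues its bookkeeping DDL before every check, and the NOTICE (rather than an error) is what keeps those repeated passes harmless. Roughly, in Go (the column list is inferred from the changelog rows printed in this log and may not match the real scripts):

```go
package migrator

import "database/sql"

// ensureBookkeeping re-runs the bookkeeping DDL on every pass. With
// IF NOT EXISTS, an existing relation produces the NOTICEs seen above
// instead of an error, so repeated initialisation is idempotent.
func ensureBookkeeping(db *sql.DB) error {
	_, err := db.Exec(`
	    CREATE TABLE IF NOT EXISTS schema_versions (
	        name    TEXT PRIMARY KEY,
	        version TEXT
	    );
	    CREATE TABLE IF NOT EXISTS pooling_schema_changelog (
	        id           SERIAL PRIMARY KEY,
	        script       TEXT,
	        operation    TEXT,
	        from_version TEXT,
	        to_version   TEXT,
	        tag          TEXT,
	        success      INT,
	        attime       TIMESTAMP DEFAULT now()
	    )`)
	return err
}
```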
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | (9 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | name | version 11:52:59 policy-db-migrator | -------------------+--------- 11:52:59 policy-db-migrator | operationshistory | 0 11:52:59 policy-db-migrator | (1 row) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:52:59 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 11:52:59 policy-db-migrator | (0 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 11:52:59 policy-db-migrator | List of databases 11:52:59 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:52:59 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:52:59 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:52:59 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | (9 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:52:59 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | upgrade: 
0 -> 1600 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | rc=0 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | > upgrade 0110-operationshistory.sql 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | CREATE INDEX 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | INSERT 0 1 11:52:59 policy-db-migrator | operationshistory: OK: upgrade (1600) 11:52:59 policy-db-migrator | List of databases 11:52:59 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:52:59 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:52:59 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:52:59 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:52:59 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:52:59 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:52:59 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:52:59 policy-db-migrator | (9 rows) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:52:59 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 11:52:59 policy-db-migrator | CREATE TABLE 11:52:59 policy-db-migrator | name | version 11:52:59 policy-db-migrator | -------------------+--------- 11:52:59 policy-db-migrator | operationshistory | 1600 11:52:59 policy-db-migrator | (1 row) 11:52:59 policy-db-migrator | 11:52:59 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:52:59 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 11:52:59 policy-db-migrator | 1 | 
11:52:59 policy-db-migrator |   1 | 0100-ophistory_id_sequence.sql | upgrade   | 1500         | 1600       | 1606251147041600u |       1 | 2025-06-16 11:47:04.395942
11:52:59 policy-db-migrator |   2 | 0110-operationshistory.sql     | upgrade   | 1500         | 1600       | 1606251147041600u |       1 | 2025-06-16 11:47:04.454825
11:52:59 policy-db-migrator | (2 rows)
11:52:59 policy-db-migrator | 
11:52:59 policy-db-migrator | operationshistory: OK @ 1600
11:52:59 policy-opa-pdp | Waiting for kafka port 9092...
11:52:59 policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused
11:52:59 policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused
11:52:59 policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded!
11:52:59 policy-opa-pdp | Waiting for pap port 6969...
11:52:59 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused
11:52:59 policy-opa-pdp | […the same "Connection refused" line repeats while pap finishes starting…]
11:52:59 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="###################################### "
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="OPA-PDP: Starting initialisation "
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="###################################### "
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="KAFKA_URL not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PAP_TOPIC not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PATCH_TOPIC not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PATCH_GROUPID not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="API_USER not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="API_PASSWORD not defined, using default value"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="UseSASLForKAFKA not defined, using default value"
11:52:59 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password=""
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Username: "
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Password: "
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false"
11:52:59 policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Configuration module: environment initialised"
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:07.2317+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:07.2319+00:00] Name: opa-7f657737-d4a9-439c-8bcc-1ec79cd614af
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:07.2352+00:00] Starting OPA PDP Service
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:48:12.2358+00:00] HTTP server started
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2368+00:00] Create an instance of OPA Object
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2369+00:00] Configure an instance of OPA Object
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2380+00:00] Topic start :::: policy-pdp-pap
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2380+00:00] Creating Kafka Consumer singleton instance
11:52:59 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-16T11:48:12.2402+00:00] Topic Subscribed: policy-pdp-pap
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2402+00:00] Created Singleton consumer instance
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:12.2516+00:00] Starting 
PDP Message Listener..... 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:22.2521+00:00] New Ticker started with interval 60000 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:48:32.2602+00:00] After registration successful delay 11:52:59 policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.2531+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.2532+00:00] Sending Heartbeat ... 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.2812+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.2813+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.2813+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8919+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8928+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] Policy Is Allowed: slice.capacity.check 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] Validating properties data for policy: slice.capacity.check 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8934+00:00] Validating properties policy for policy: slice.capacity.check 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.8934+00:00] Validation successful for policy: slice.capacity.check 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.8940+00:00] Directory created: /opt/policies/slice/capacity/check 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.8941+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.8944+00:00] Directory created: /opt/data/node/slice/capacity/check 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.8944+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.8944+00:00] Before calling combinedoutput 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9134+00:00] Bundle Built Sucessfully.... 
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9162+00:00] storage not found creating : /node
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9163+00:00] storage not found creating : /node/slice
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9164+00:00] storage not found creating : /node/slice/capacity
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9165+00:00] storage not found creating : /node/slice/capacity/check
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9167+00:00] PoliciesDeployed Map: {
11:52:59 policy-opa-pdp | "deployed_policies_dict": [
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp | "data": [
11:52:59 policy-opa-pdp | "node.slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy": [
11:52:59 policy-opa-pdp | "slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check",
11:52:59 policy-opa-pdp | "policy-version": "1.0.0"
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | ]
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9168+00:00] Loaded Policy: slice.capacity.check
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9170+00:00] Processed policies_to_be_deployed successfully
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9171+00:00] Sending PDP Status With Update Response
11:52:59 policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9174+00:00] [OUT|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""}
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9175+00:00] PDP_STATUS Message Sent Successfully
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9176+00:00] 120000
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9178+00:00] New Ticker started with interval 120000
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9268+00:00] [IN|KAFKA|policy-pdp-pap] { …echo of the PDP_STATUS just sent… }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9270+00:00] messageType: PDP_STATUS
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9272+00:00] discarding event of type PDP_STATUS
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9571+00:00] [IN|KAFKA|policy-pdp-pap]
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9573+00:00] messageType: PDP_STATE_CHANGE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9575+00:00] PDP STATE CHANGE message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9576+00:00] State change from PASSIVE To : ACTIVE 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9577+00:00] Sending PDP Status With State Change response 11:52:59 policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9580+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:22.9581+00:00] PDP_STATUS With State Change Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9658+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9659+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9659+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2338+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2340+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2344+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:23.2346+00:00] Sending PDP Status With Update Response 11:52:59 policy-opa-pdp | 2025/06/16 11:49:23 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2349+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:49:23.2350+00:00] PDP_STATUS Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2351+00:00] 120000 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2424+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2426+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2427+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | 2025/06/16 11:50:22 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:22.2533+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:22.2534+00:00] Sending Heartbeat ... 
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:22.2626+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:22.2627+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:22.2627+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | WARN[2025-06-16T11:50:38.9323+00:00] Invalid or Missing Request ID 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:38.9324+00:00] Received Health Check message 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:38.9393+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:38.9394+00:00] datapath to get Data : / 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:38.9396+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3110+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3113+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3116+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Check if Policy is Already Deployed: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3117+00:00] Policy is new and should be deployed: zoneB 1.0.6 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Policy Is Allowed: zoneB 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Validating properties data for policy: zoneB 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Validating properties policy for policy: zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3117+00:00] Validation successful for policy: zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Directory created: /opt/policies/zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Directory created: /opt/data/node/zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3120+00:00] Data file saved: /opt/data/node/zoneB/data.json 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3120+00:00] Before calling combinedoutput 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3376+00:00] Bundle Built Sucessfully.... 
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3442+00:00] storage not found creating : /node/zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3444+00:00] PoliciesDeployed Map: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.zoneB" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "zoneB" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "zoneB", 11:52:59 policy-opa-pdp | "policy-version": "1.0.6" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3444+00:00] Loaded Policy: zoneB 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3445+00:00] Processed policies_to_be_deployed successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3445+00:00] Sending PDP Status With Update Response 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3446+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:50:40.3446+00:00] PDP_STATUS Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3446+00:00] 0 11:52:59 policy-opa-pdp | 2025/06/16 11:50:40 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3524+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3525+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3525+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.4996+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.4997+00:00] datapath to get Data : /node/zoneB/zone 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.4998+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5099+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5100+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5104+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5105+00:00] SDK making a decision 11:52:59 policy-opa-pdp | {"decision_id":"aa050755-2cd8-465c-837f-0da821e38d6e","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":930,"timer_rego_query_compile_ns":151243,"timer_rego_query_eval_ns":542010,"timer_rego_query_parse_ns":104322,"timer_sdk_decision_eval_ns":1019410},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T11:51:04Z","timestamp":"2025-06-16T11:51:04.510601634Z","type":"openpolicyagent.org/decision_logs"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5123+00:00] RAW opa Decision output: 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "ID": "aa050755-2cd8-465c-837f-0da821e38d6e", 11:52:59 policy-opa-pdp | "Result": { 11:52:59 policy-opa-pdp | "action_is_log_view": true, 11:52:59 policy-opa-pdp | "allow": true, 11:52:59 policy-opa-pdp | "has_zone_access": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "access": "granted", 11:52:59 policy-opa-pdp | "user": "user1" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | "Provenance": { 11:52:59 policy-opa-pdp | "version": "1.1.0", 11:52:59 policy-opa-pdp | "build_commit": "", 11:52:59 policy-opa-pdp | "build_timestamp": "", 11:52:59 policy-opa-pdp | "build_hostname": "" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5248+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5249+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5252+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | WARN[2025-06-16T11:51:04.5253+00:00] Policy Name zoeB does not exist 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5319+00:00] PDP received a decision request. 
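To unpack the first decision above: the input asked for zone zoneA in the window 09:00 to 10:00, and of the three zone_access_logs entries only log1 (zoneA, 09:00:00Z, granted, user1) falls inside it, so has_zone_access collects just the requested datatypes from that entry and allow evaluates to true. Reconstructed from the decision log entry:

input:  {"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"}
result: {"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]}

The request rejected with "Policy Name zoeB does not exist" appears to be the suite's negative case: the policy name is misspelled on purpose, and the PDP refuses to evaluate it rather than guessing. The third decision request's processing continues below.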
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5319+00:00] Headers processed for requestId: Unknown
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5322+00:00] Validation successful for request fields
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5322+00:00] SDK making a decision
11:52:59 policy-opa-pdp | {"decision_id":"d2ba74e2-60ce-4d1b-9928-429147f823f3","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":720,"timer_rego_query_eval_ns":435760,"timer_sdk_decision_eval_ns":547622},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T11:51:04Z","timestamp":"2025-06-16T11:51:04.532313602Z","type":"openpolicyagent.org/decision_logs"}
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.5331+00:00] RAW opa Decision output:
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp |   "ID": "d2ba74e2-60ce-4d1b-9928-429147f823f3",
11:52:59 policy-opa-pdp |   "Result": {
11:52:59 policy-opa-pdp |     "action_is_log_view": true,
11:52:59 policy-opa-pdp |     "allow": true,
11:52:59 policy-opa-pdp |     "has_zone_access": [
11:52:59 policy-opa-pdp |       {
11:52:59 policy-opa-pdp |         "access": "granted",
11:52:59 policy-opa-pdp |         "user": "user1"
11:52:59 policy-opa-pdp |       }
11:52:59 policy-opa-pdp |     ]
11:52:59 policy-opa-pdp |   },
11:52:59 policy-opa-pdp |   "Provenance": {
11:52:59 policy-opa-pdp |     "version": "1.1.0",
11:52:59 policy-opa-pdp |     "build_commit": "",
11:52:59 policy-opa-pdp |     "build_timestamp": "",
11:52:59 policy-opa-pdp |     "build_hostname": ""
11:52:59 policy-opa-pdp |   }
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8246+00:00] [IN|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8247+00:00] messageType: PDP_UPDATE
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8249+00:00] PDP_UPDATE Message received: { …same payload as above… }
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8250+00:00] Found Policies to be undeployed
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8250+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8251+00:00] Deleting Policy from OPA : /zoneB
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8269+00:00] Removing policy directory: /opt/policies/zoneB
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8271+00:00] Deleting data from OPA : /node/zoneB
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8272+00:00] Analyzing dataPath: /node/zoneB
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8273+00:00] Path segments: [ node zoneB]
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8273+00:00] Path doesn't have any parent-child hierarchy; so returning the original path: /node/zoneB
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8274+00:00] Removing data directory: /opt/data/node/zoneB
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8276+00:00] PoliciesDeployed Map: {
11:52:59 policy-opa-pdp | "deployed_policies_dict": [
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp | "data": [
11:52:59 policy-opa-pdp | "node.slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy": [
11:52:59 policy-opa-pdp | "slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check",
11:52:59 policy-opa-pdp | "policy-version": "1.0.0"
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | ]
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8276+00:00] Policies Map After Undeployment : { …identical to the map above… }
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8277+00:00] Processed policies_to_be_undeployed successfully
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8278+00:00] Sending PDP Status With Update Response
11:52:59 policy-opa-pdp | 2025/06/16 11:51:04 KafkaProducer or producer produce message
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8279+00:00] [OUT|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""}
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:04.8280+00:00] PDP_STATUS Message Sent Successfully
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8280+00:00] 0
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8352+00:00] [IN|KAFKA|policy-pdp-pap] { …echo of the PDP_STATUS just sent… }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8355+00:00] messageType: PDP_STATUS
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8355+00:00] discarding event of type PDP_STATUS
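The odd-looking "Path segments: [ node zoneB]" in the undeployment trace above is not corruption: splitting the absolute path "/node/zoneB" on "/" yields a leading empty element, and Go's fmt prints a string slice without quotes, so the empty string appears as the gap after the bracket. A minimal reproduction, illustrative only:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Splitting an absolute path on "/" leaves a leading empty element;
	// %v prints []string without quotes, so it shows as "[ node zoneB]".
	segments := strings.Split("/node/zoneB", "/")
	fmt.Printf("Path segments: %v\n", segments) // Path segments: [ node zoneB]
}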
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9180+00:00] [IN|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9182+00:00] messageType: PDP_UPDATE
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9184+00:00] PDP_UPDATE Message received: { …same payload as above… }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9185+00:00] Check if Policy is Already Deployed: {
11:52:59 policy-opa-pdp | "deployed_policies_dict": [
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp | "data": [
11:52:59 policy-opa-pdp | "node.slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy": [
11:52:59 policy-opa-pdp | "slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check",
11:52:59 policy-opa-pdp | "policy-version": "1.0.0"
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | ]
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9186+00:00] Policy is new and should be deployed: vehicle 1.0.6
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9186+00:00] Policy Is Allowed: vehicle
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9187+00:00] Validating properties data for policy: vehicle
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9188+00:00] Validating properties policy for policy: vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9188+00:00] Validation successful for policy: vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9190+00:00] Directory created: /opt/policies/vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9192+00:00] Policy file saved: /opt/policies/vehicle/policy.rego
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9193+00:00] Directory created: /opt/data/node/vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9194+00:00] Data file saved: /opt/data/node/vehicle/data.json
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9194+00:00] Before calling combinedoutput
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9455+00:00] Bundle Built Successfully....
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9511+00:00] storage not found creating : /node/vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9512+00:00] PoliciesDeployed Map: {
11:52:59 policy-opa-pdp | "deployed_policies_dict": [
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp | "data": [
11:52:59 policy-opa-pdp | "node.slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy": [
11:52:59 policy-opa-pdp | "slice.capacity.check"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check",
11:52:59 policy-opa-pdp | "policy-version": "1.0.0"
11:52:59 policy-opa-pdp | },
11:52:59 policy-opa-pdp | {
11:52:59 policy-opa-pdp | "data": [
11:52:59 policy-opa-pdp | "node.vehicle"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy": [
11:52:59 policy-opa-pdp | "vehicle"
11:52:59 policy-opa-pdp | ],
11:52:59 policy-opa-pdp | "policy-id": "vehicle",
11:52:59 policy-opa-pdp | "policy-version": "1.0.6"
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | ]
11:52:59 policy-opa-pdp | }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9513+00:00] Loaded Policy: vehicle
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9513+00:00] Processed policies_to_be_deployed successfully
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9513+00:00] Sending PDP Status With Update Response
11:52:59 policy-opa-pdp | 2025/06/16 11:51:05 KafkaProducer or producer produce message
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9514+00:00] [OUT|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""}
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:05.9514+00:00] PDP_STATUS Message Sent Successfully
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9514+00:00] 0
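The vehicle policy payload above decodes verbatim to the Rego below; its data payload decodes to the two-vehicle JSON that the GET on /node/vehicle returns in the entries that follow:

package vehicle

import rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}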
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] [IN|KAFKA|policy-pdp-pap] { …echo of the PDP_STATUS just sent… }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] messageType: PDP_STATUS
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] discarding event of type PDP_STATUS
11:52:59 policy-opa-pdp | 2025/06/16 11:51:22 KafkaProducer or producer produce message
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:22.9189+00:00] [OUT|KAFKA|policy-pdp-pap]
11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""}
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:22.9191+00:00] Sending Heartbeat ...
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:22.9268+00:00] [IN|KAFKA|policy-pdp-pap] { …echo of the heartbeat just sent… }
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:22.9278+00:00] messageType: PDP_STATUS
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:22.9278+00:00] discarding event of type PDP_STATUS
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9860+00:00] PDP received a request to get data through API
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9861+00:00] datapath to get Data : /node/vehicle
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9861+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]}
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9973+00:00] PDP received a request to update data through API
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9978+00:00] All fields are valid!
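The three data-update requests that follow exercise the JSON Patch (RFC 6902) operations handled by the PDP's data API. Reconstructed from the op/path/value maps logged below, the patch bodies are, in order:

[ { "op": "add",     "path": "/round", "value": "trail" } ]
[ { "op": "replace", "path": "/round", "value": 578 } ]
[ { "op": "remove",  "path": "/round" } ]

Each patch is applied at the matched policy's data root /node/vehicle, which is why the interleaved GETs show /round appear as "trail", change to 578, and then disappear again.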
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9979+00:00] data : [map[op:add path:/round value:trail]] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9979+00:00] policy name : vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9979+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9980+00:00] dirParts : [ node vehicle] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9983+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] root: /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] path : round 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9984+00:00] calling ParsePatchPathEscaped to check the path 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] No path conflicts detected 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:29.9985+00:00] Updated the data in the corresponding path successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0058+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0059+00:00] datapath to get Data : /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0060+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0153+00:00] PDP received a request to update data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0157+00:00] All fields are valid! 
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0158+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0158+00:00] policy name : vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0160+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0160+00:00] dirParts : [ node vehicle] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0161+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0162+00:00] root: /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0162+00:00] path : round 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0163+00:00] calling ParsePatchPathEscaped to check the path 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0165+00:00] No path conflicts detected 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0166+00:00] Updated the data in the corresponding path successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0231+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0231+00:00] datapath to get Data : /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0233+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0328+00:00] PDP received a request to update data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0333+00:00] All fields are valid! 
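The value %!s(float64=578) in the replace cycle above is a Go formatting artifact, not corruption: JSON numbers unmarshal into interface{} as float64, and logging one through a %s verb makes fmt emit its wrong-verb diagnostic. A minimal reproduction, illustrative only:

package main

import "fmt"

func main() {
	// JSON numbers decode into interface{} as float64. Printing one with
	// the %s verb triggers fmt's diagnostic form %!s(float64=578).
	var value interface{} = 578.0
	fmt.Printf("data : [map[op:replace path:/round value:%s]]\n", value)
	// Output: data : [map[op:replace path:/round value:%!s(float64=578)]]
}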
11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0333+00:00] data : [map[op:remove path:/round]] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0333+00:00] policy name : vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0335+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0335+00:00] dirParts : [ node vehicle] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0337+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0338+00:00] root: /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0339+00:00] path : round 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0340+00:00] calling ParsePatchPathEscaped to check the path 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0341+00:00] No path conflicts detected 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0343+00:00] Updated the data in the corresponding path successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.0406+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0407+00:00] datapath to get Data : /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0408+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0498+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0498+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0502+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0503+00:00] SDK making a decision 11:52:59 policy-opa-pdp | {"decision_id":"73c2af41-a1a3-424f-8114-190e065ca726","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":770,"timer_rego_query_compile_ns":138783,"timer_rego_query_eval_ns":462099,"timer_rego_query_parse_ns":116452,"timer_sdk_decision_eval_ns":1011009},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T11:51:30Z","timestamp":"2025-06-16T11:51:30.050541644Z","type":"openpolicyagent.org/decision_logs"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0519+00:00] RAW opa Decision output: 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "ID": "73c2af41-a1a3-424f-8114-190e065ca726", 11:52:59 policy-opa-pdp | "Result": { 11:52:59 policy-opa-pdp | "action_is_granted": true, 11:52:59 policy-opa-pdp | "allow": true, 11:52:59 policy-opa-pdp | "user_has_vehicle_access": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "status": "available", 11:52:59 policy-opa-pdp | "type": "car" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | "Provenance": { 11:52:59 policy-opa-pdp | "version": "1.1.0", 11:52:59 policy-opa-pdp | "build_commit": "", 11:52:59 policy-opa-pdp | "build_timestamp": "", 11:52:59 
policy-opa-pdp | "build_hostname": "" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0590+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0591+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0594+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | WARN[2025-06-16T11:51:30.0596+00:00] Policy Name vehile does not exist 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0679+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0680+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0684+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0685+00:00] SDK making a decision 11:52:59 policy-opa-pdp | {"decision_id":"f5b941d3-e5fb-49e6-861f-1c93b68ee8a5","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":990,"timer_rego_query_eval_ns":438378,"timer_sdk_decision_eval_ns":617431},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T11:51:30Z","timestamp":"2025-06-16T11:51:30.068750787Z","type":"openpolicyagent.org/decision_logs"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.0695+00:00] RAW opa Decision output: 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "ID": "f5b941d3-e5fb-49e6-861f-1c93b68ee8a5", 11:52:59 policy-opa-pdp | "Result": { 11:52:59 policy-opa-pdp | "action_is_granted": true, 11:52:59 policy-opa-pdp | "allow": true, 11:52:59 policy-opa-pdp | "user_has_vehicle_access": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "status": "available", 11:52:59 policy-opa-pdp | "type": "car" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | "Provenance": { 11:52:59 policy-opa-pdp | "version": "1.1.0", 11:52:59 policy-opa-pdp | "build_commit": "", 11:52:59 policy-opa-pdp | "build_timestamp": "", 11:52:59 policy-opa-pdp | "build_hostname": "" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3073+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3074+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3079+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.3079+00:00] Found Policies to be undeployed 11:52:59 policy-opa-pdp | 
INFO[2025-06-16T11:51:30.3079+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3080+00:00] Deleting Policy from OPA : /vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3105+00:00] Removing policy directory: /opt/policies/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Deleting data from OPA : /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Analyzing dataPath: /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Path segments: [ node vehicle] 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3109+00:00] Removing data directory: /opt/data/node/vehicle 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.3111+00:00] PoliciesDeployed Map: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3111+00:00] Policies Map After Undeployment : { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.3113+00:00] Processed policies_to_be_undeployed successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.3114+00:00] Sending PDP Status With Update Response 11:52:59 policy-opa-pdp | 2025/06/16 11:51:30 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3115+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.3115+00:00] PDP_STATUS Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3116+00:00] 0 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3191+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3192+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3192+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.6900+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.6901+00:00] datapath to get Data : /node/vehicle 11:52:59 policy-opa-pdp | WARN[2025-06-16T11:51:30.6901+00:00] Error in reading data under /node/vehicle path 11:52:59 policy-opa-pdp | ERRO[2025-06-16T11:51:30.6903+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.7003+00:00] PDP received a request to update data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.7005+00:00] All fields are valid! 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.7006+00:00] data : [map[op:remove path:/round]] 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:30.7006+00:00] policy name : vehicle 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:30.7006+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] 11:52:59 policy-opa-pdp | ERRO[2025-06-16T11:51:30.7007+00:00] Policy associated with the patch request does not exists 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3605+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3608+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3610+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3610+00:00] Check if Policy is Already Deployed: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3611+00:00] Policy is new and should be deployed: abac 1.0.7 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Policy Is Allowed: abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Validating properties data for policy: abac 11:52:59 
policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Validating properties policy for policy: abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3612+00:00] Validation successful for policy: abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3615+00:00] Directory created: /opt/policies/abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3616+00:00] Policy file saved: /opt/policies/abac/policy.rego 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3617+00:00] Directory created: /opt/data/node/abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3618+00:00] Data file saved: /opt/data/node/abac/data.json 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3619+00:00] Before calling combinedoutput 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3779+00:00] Bundle Built Sucessfully.... 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3835+00:00] storage not found creating : /node/abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3838+00:00] PoliciesDeployed Map: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.abac" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "abac" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "abac", 11:52:59 policy-opa-pdp | "policy-version": "1.0.7" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3838+00:00] Loaded Policy: abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3839+00:00] Processed policies_to_be_deployed successfully 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3840+00:00] Sending PDP Status With Update Response 11:52:59 policy-opa-pdp | 2025/06/16 11:51:31 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3841+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:31.3842+00:00] PDP_STATUS Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3843+00:00] 0 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3912+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for 
all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3913+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3913+00:00] discarding event of type PDP_STATUS 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:55.4302+00:00] PDP received a request to get data through API 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4303+00:00] datapath to get Data : /node/abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4305+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4406+00:00] PDP received a decision request. 
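The sensor_data document returned above is the decoded form of the base64 "data" blob carried in the PDP_UPDATE a few entries earlier. Base64-decoding the accompanying "policy" blob from that same message yields, reconstructed from the log's own payload, the Rego package below; it explains the decision results that follow, where viewable_sensor_data keeps only the requested datatypes for readings inside the requested time window:

    package abac

    import rego.v1

    default allow := false

    allow if {
        viewable_sensor_data
        action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
        some sensor_data in data.node.abac.sensor_data
        sensor_data.timestamp >= input.time_period.from
        sensor_data.timestamp < input.time_period.to
        view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }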
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4407+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4410+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4411+00:00] SDK making a decision 11:52:59 policy-opa-pdp | {"decision_id":"0c9b3ab6-7c7e-43b1-9ad5-3430e944419f","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1130,"timer_rego_query_compile_ns":220034,"timer_rego_query_eval_ns":1332106,"timer_rego_query_parse_ns":126133,"timer_sdk_decision_eval_ns":1909667},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T11:51:55Z","timestamp":"2025-06-16T11:51:55.441190716Z","type":"openpolicyagent.org/decision_logs"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4439+00:00] RAW opa Decision output: 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "ID": "0c9b3ab6-7c7e-43b1-9ad5-3430e944419f", 11:52:59 policy-opa-pdp | "Result": { 11:52:59 policy-opa-pdp | "action_is_read": true, 11:52:59 policy-opa-pdp | "allow": true, 11:52:59 policy-opa-pdp | "viewable_sensor_data": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Galle", 11:52:59 policy-opa-pdp | "precipitation": "500 mm", 11:52:59 policy-opa-pdp | "temperature": "35 C", 11:52:59 policy-opa-pdp | "windspeed": "7.2 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Jaffna", 11:52:59 policy-opa-pdp | "precipitation": "300 mm", 11:52:59 policy-opa-pdp | "temperature": "-5 C", 11:52:59 policy-opa-pdp | "windspeed": "3.8 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Nuwara Eliya", 11:52:59 policy-opa-pdp | "precipitation": "600 mm", 11:52:59 policy-opa-pdp | "temperature": "25 C", 11:52:59 policy-opa-pdp | "windspeed": "4.0 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Trincomalee", 11:52:59 policy-opa-pdp | "precipitation": "1000 mm", 11:52:59 policy-opa-pdp | "temperature": "20 C", 11:52:59 policy-opa-pdp | "windspeed": "5.0 m/s" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | "Provenance": { 11:52:59 policy-opa-pdp | "version": "1.1.0", 11:52:59 policy-opa-pdp | "build_commit": "", 11:52:59 policy-opa-pdp | "build_timestamp": "", 11:52:59 policy-opa-pdp | "build_hostname": "" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4515+00:00] PDP received a decision request. 
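For reference, a hypothetical reproduction of the decision call evaluated above. Only the "input" object is copied from the decision log; the endpoint URL, port, request envelope, and the omitted authentication are assumptions and may differ from the real CSIT client:

    # Hypothetical sketch of a decision request against the OPA PDP.
    import json
    import urllib.request

    body = {
        "policyName": "abac",  # assumed field name for selecting the policy
        "input": {             # copied verbatim from the decision log above
            "actions": ["read"],
            "datatypes": ["location", "temperature", "precipitation", "windspeed"],
            "time_period": {"from": "2024-02-27", "to": "2024-02-29"},
        },
    }
    req = urllib.request.Request(
        "http://localhost:8282/policy/pdp/engine/v1/decision",  # assumed URL
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(json.load(urllib.request.urlopen(req)))  # expect allow: true

The WARN entry that follows ("Policy Name abc does not exist") is the negative half of the same test: a request naming an undeployed policy passes field validation but is rejected at policy lookup, before the SDK evaluates any Rego.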
11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4517+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4521+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | WARN[2025-06-16T11:51:55.4523+00:00] Policy Name abc does not exist 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4598+00:00] PDP received a decision request. 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4598+00:00] Headers processed for requestId: Unknown 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4601+00:00] Validation successful for request fields 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4602+00:00] SDK making a decision 11:52:59 policy-opa-pdp | {"decision_id":"130ce0d0-f0d7-43f1-8214-b9f4193258db","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":900,"timer_rego_query_eval_ns":884346,"timer_sdk_decision_eval_ns":997329},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T11:51:55Z","timestamp":"2025-06-16T11:51:55.460292991Z","type":"openpolicyagent.org/decision_logs"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.4616+00:00] RAW opa Decision output: 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "ID": "130ce0d0-f0d7-43f1-8214-b9f4193258db", 11:52:59 policy-opa-pdp | "Result": { 11:52:59 policy-opa-pdp | "action_is_read": true, 11:52:59 policy-opa-pdp | "allow": true, 11:52:59 policy-opa-pdp | "viewable_sensor_data": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Galle", 11:52:59 policy-opa-pdp | "precipitation": "500 mm", 11:52:59 policy-opa-pdp | "temperature": "35 C", 11:52:59 policy-opa-pdp | "windspeed": "7.2 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Jaffna", 11:52:59 policy-opa-pdp | "precipitation": "300 mm", 11:52:59 policy-opa-pdp | "temperature": "-5 C", 11:52:59 policy-opa-pdp | "windspeed": "3.8 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Nuwara Eliya", 11:52:59 policy-opa-pdp | "precipitation": "600 mm", 11:52:59 policy-opa-pdp | "temperature": "25 C", 11:52:59 policy-opa-pdp | "windspeed": "4.0 m/s" 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "location": "Trincomalee", 11:52:59 policy-opa-pdp | "precipitation": "1000 mm", 11:52:59 policy-opa-pdp | "temperature": "20 C", 11:52:59 policy-opa-pdp | "windspeed": "5.0 m/s" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | }, 11:52:59 policy-opa-pdp | "Provenance": { 11:52:59 policy-opa-pdp | "version": "1.1.0", 11:52:59 policy-opa-pdp | "build_commit": "", 11:52:59 policy-opa-pdp | "build_timestamp": "", 11:52:59 policy-opa-pdp | "build_hostname": "" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | 
DEBU[2025-06-16T11:51:55.9965+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9966+00:00] messageType: PDP_UPDATE 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9968+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:55.9968+00:00] Found Policies to be undeployed 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:55.9968+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9969+00:00] Deleting Policy from OPA : /abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9993+00:00] Removing policy directory: /opt/policies/abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Deleting data from OPA : /node/abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Analyzing dataPath: /node/abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Path segments: [ node abac] 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0000+00:00] Removing data directory: /opt/data/node/abac 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:56.0002+00:00] PoliciesDeployed Map: { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0002+00:00] Policies Map After Undeployment : { 11:52:59 policy-opa-pdp | "deployed_policies_dict": [ 11:52:59 policy-opa-pdp | { 11:52:59 policy-opa-pdp | "data": [ 11:52:59 policy-opa-pdp | "node.slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy": [ 11:52:59 policy-opa-pdp | "slice.capacity.check" 11:52:59 policy-opa-pdp | ], 11:52:59 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:52:59 policy-opa-pdp | "policy-version": "1.0.0" 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | ] 11:52:59 policy-opa-pdp | } 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:56.0003+00:00] Processed policies_to_be_undeployed successfully 11:52:59 policy-opa-pdp | 2025/06/16 11:51:56 KafkaProducer or producer produce message 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:56.0004+00:00] Sending PDP Status With Update Response 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0005+00:00] [OUT|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | INFO[2025-06-16T11:51:56.0007+00:00] PDP_STATUS Message Sent Successfully 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0007+00:00] 0 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0088+00:00] [IN|KAFKA|policy-pdp-pap] 11:52:59 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0089+00:00] messageType: PDP_STATUS 11:52:59 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0089+00:00] discarding event of type PDP_STATUS 11:53:00 policy-pap | Waiting for api port 6969... 11:53:00 policy-pap | api (172.17.0.7:6969) open 11:53:00 policy-pap | Waiting for kafka port 9092... 11:53:00 policy-pap | kafka (172.17.0.5:9092) open 11:53:00 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 11:53:00 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 11:53:00 policy-pap | 11:53:00 policy-pap | . ____ _ __ _ _ 11:53:00 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:53:00 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:53:00 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:53:00 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 11:53:00 policy-pap | =========|_|==============|___/=/_/_/_/ 11:53:00 policy-pap | 11:53:00 policy-pap | :: Spring Boot :: (v3.4.6) 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:18.165+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 60 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 11:53:00 policy-pap | [2025-06-16T11:47:18.167+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 11:53:00 policy-pap | [2025-06-16T11:47:19.518+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:53:00 policy-pap | [2025-06-16T11:47:19.602+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 7 JPA repository interfaces. 
11:53:00 policy-pap | [2025-06-16T11:47:20.517+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 11:53:00 policy-pap | [2025-06-16T11:47:20.530+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:53:00 policy-pap | [2025-06-16T11:47:20.532+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:53:00 policy-pap | [2025-06-16T11:47:20.532+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 11:53:00 policy-pap | [2025-06-16T11:47:20.575+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 11:53:00 policy-pap | [2025-06-16T11:47:20.575+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2353 ms 11:53:00 policy-pap | [2025-06-16T11:47:21.020+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:53:00 policy-pap | [2025-06-16T11:47:21.095+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 11:53:00 policy-pap | [2025-06-16T11:47:21.151+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 11:53:00 policy-pap | [2025-06-16T11:47:21.537+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 11:53:00 policy-pap | [2025-06-16T11:47:21.578+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 11:53:00 policy-pap | [2025-06-16T11:47:21.795+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@c96c497 11:53:00 policy-pap | [2025-06-16T11:47:21.797+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 11:53:00 policy-pap | [2025-06-16T11:47:21.880+00:00|INFO|pooling|main] HHH10001005: Database info: 11:53:00 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 11:53:00 policy-pap | Database driver: undefined/unknown 11:53:00 policy-pap | Database version: 16.4 11:53:00 policy-pap | Autocommit mode: undefined/unknown 11:53:00 policy-pap | Isolation level: undefined/unknown 11:53:00 policy-pap | Minimum pool size: undefined/unknown 11:53:00 policy-pap | Maximum pool size: undefined/unknown 11:53:00 policy-pap | [2025-06-16T11:47:23.741+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 11:53:00 policy-pap | [2025-06-16T11:47:23.745+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:53:00 policy-pap | [2025-06-16T11:47:24.974+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:00 policy-pap | allow.auto.create.topics = true 11:53:00 policy-pap | auto.commit.interval.ms = 5000 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | auto.offset.reset = latest 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | check.crcs = true 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-1 11:53:00 policy-pap | client.rack = 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | default.api.timeout.ms = 60000 11:53:00 policy-pap | enable.auto.commit = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | exclude.internal.topics = true 11:53:00 policy-pap | fetch.max.bytes = 52428800 11:53:00 policy-pap | fetch.max.wait.ms = 500 11:53:00 policy-pap | 
fetch.min.bytes = 1 11:53:00 policy-pap | group.id = 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 11:53:00 policy-pap | group.instance.id = null 11:53:00 policy-pap | group.protocol = classic 11:53:00 policy-pap | group.remote.assignor = null 11:53:00 policy-pap | heartbeat.interval.ms = 3000 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | internal.leave.group.on.close = true 11:53:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:00 policy-pap | isolation.level = read_uncommitted 11:53:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | max.partition.fetch.bytes = 1048576 11:53:00 policy-pap | max.poll.interval.ms = 300000 11:53:00 policy-pap | max.poll.records = 500 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:00 policy-pap | receive.buffer.bytes = 65536 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap | sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | 
session.timeout.ms = 45000 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:25.026+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074445163 11:53:00 policy-pap | [2025-06-16T11:47:25.166+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-1, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Subscribed to topic(s): policy-pdp-pap 11:53:00 policy-pap | [2025-06-16T11:47:25.167+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:00 policy-pap | allow.auto.create.topics = true 11:53:00 policy-pap | auto.commit.interval.ms = 5000 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | auto.offset.reset = latest 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | check.crcs = true 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = consumer-policy-pap-2 11:53:00 policy-pap | client.rack = 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | default.api.timeout.ms = 60000 11:53:00 policy-pap | enable.auto.commit = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | exclude.internal.topics = true 11:53:00 policy-pap | fetch.max.bytes = 52428800 11:53:00 policy-pap | fetch.max.wait.ms = 500 11:53:00 policy-pap | fetch.min.bytes = 1 11:53:00 policy-pap | group.id = policy-pap 11:53:00 policy-pap | group.instance.id = null 11:53:00 policy-pap | group.protocol = classic 11:53:00 policy-pap | group.remote.assignor = null 11:53:00 policy-pap | heartbeat.interval.ms = 3000 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | internal.leave.group.on.close = true 11:53:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:00 policy-pap | isolation.level = read_uncommitted 11:53:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | 
max.partition.fetch.bytes = 1048576 11:53:00 policy-pap | max.poll.interval.ms = 300000 11:53:00 policy-pap | max.poll.records = 500 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:00 policy-pap | receive.buffer.bytes = 65536 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap | sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | session.timeout.ms = 45000 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 
policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:25.167+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074445175 11:53:00 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:53:00 policy-pap | [2025-06-16T11:47:25.489+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 11:53:00 policy-pap | [2025-06-16T11:47:25.616+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 11:53:00 policy-pap | [2025-06-16T11:47:25.692+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 11:53:00 policy-pap | [2025-06-16T11:47:25.914+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
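The ConsumerConfig dumps above are the standard Kafka Java client printing its effective configuration at construction time: bootstrap.servers=[kafka:9092], group.id=policy-pap (or the UUID group), auto.offset.reset=latest, and String deserializers for key and value, subscribing to policy-pdp-pap. A minimal sketch of an equivalent consumer, assuming the Kafka clients 3.9.x API reported in the log (the class name PdpPapConsumerSketch and the 15-second poll are illustrative, not PAP's actual code):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // values mirror the ConsumerConfig dump above
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // topic from the log
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }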
11:53:00 policy-pap | [2025-06-16T11:47:26.761+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 11:53:00 policy-pap | [2025-06-16T11:47:26.872+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:53:00 policy-pap | [2025-06-16T11:47:26.892+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 11:53:00 policy-pap | [2025-06-16T11:47:26.913+00:00|INFO|ServiceManager|main] Policy PAP starting 11:53:00 policy-pap | [2025-06-16T11:47:26.913+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 11:53:00 policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 11:53:00 policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 11:53:00 policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 11:53:00 policy-pap | [2025-06-16T11:47:26.915+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 11:53:00 policy-pap | [2025-06-16T11:47:26.915+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 11:53:00 policy-pap | [2025-06-16T11:47:26.916+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@494e502c 11:53:00 policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:00 policy-pap | allow.auto.create.topics = true 11:53:00 policy-pap | auto.commit.interval.ms = 5000 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | auto.offset.reset = latest 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | check.crcs = true 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3 11:53:00 policy-pap | client.rack = 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | default.api.timeout.ms = 60000 11:53:00 policy-pap | enable.auto.commit = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | exclude.internal.topics = true 11:53:00 policy-pap | 
fetch.max.bytes = 52428800 11:53:00 policy-pap | fetch.max.wait.ms = 500 11:53:00 policy-pap | fetch.min.bytes = 1 11:53:00 policy-pap | group.id = 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 11:53:00 policy-pap | group.instance.id = null 11:53:00 policy-pap | group.protocol = classic 11:53:00 policy-pap | group.remote.assignor = null 11:53:00 policy-pap | heartbeat.interval.ms = 3000 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | internal.leave.group.on.close = true 11:53:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:00 policy-pap | isolation.level = read_uncommitted 11:53:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | max.partition.fetch.bytes = 1048576 11:53:00 policy-pap | max.poll.interval.ms = 300000 11:53:00 policy-pap | max.poll.records = 500 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:00 policy-pap | receive.buffer.bytes = 65536 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap | sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | 
security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | session.timeout.ms = 45000 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446934 11:53:00 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Subscribed to topic(s): policy-pdp-pap 11:53:00 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 11:53:00 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@65450878 11:53:00 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:00 policy-pap | allow.auto.create.topics = true 11:53:00 policy-pap | auto.commit.interval.ms = 5000 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | auto.offset.reset = latest 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | check.crcs = true 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = consumer-policy-pap-4 11:53:00 policy-pap | client.rack = 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | default.api.timeout.ms = 60000 11:53:00 policy-pap | enable.auto.commit = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | exclude.internal.topics = true 11:53:00 policy-pap | fetch.max.bytes = 52428800 11:53:00 policy-pap | fetch.max.wait.ms = 500 11:53:00 policy-pap | fetch.min.bytes = 1 11:53:00 policy-pap | group.id = policy-pap 11:53:00 policy-pap | group.instance.id = null 11:53:00 policy-pap | group.protocol = classic 11:53:00 policy-pap | group.remote.assignor = null 11:53:00 policy-pap | heartbeat.interval.ms = 3000 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | internal.leave.group.on.close = true 11:53:00 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:00 policy-pap | isolation.level = read_uncommitted 11:53:00 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | max.partition.fetch.bytes = 1048576 11:53:00 policy-pap | max.poll.interval.ms = 300000 11:53:00 policy-pap | max.poll.records = 500 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:00 policy-pap | receive.buffer.bytes = 65536 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap | sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | 
sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | session.timeout.ms = 45000 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:26.936+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446941 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|ServiceManager|main] Policy PAP starting topics 11:53:00 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.942+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.942+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e2695bc6-c57e-4b98-b4cd-fa67d17e9724, alive=false, publisher=null]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.953+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:53:00 policy-pap | acks = -1 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | batch.size = 16384 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | buffer.memory = 33554432 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = producer-1 11:53:00 policy-pap | compression.gzip.level = -1 11:53:00 policy-pap | compression.lz4.level = 9 11:53:00 policy-pap | compression.type = none 11:53:00 policy-pap | compression.zstd.level = 3 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | delivery.timeout.ms = 120000 11:53:00 policy-pap | enable.idempotence = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:00 policy-pap | linger.ms = 0 11:53:00 policy-pap | max.block.ms = 60000 11:53:00 policy-pap | max.in.flight.requests.per.connection = 5 11:53:00 policy-pap | max.request.size = 1048576 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.max.idle.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partitioner.adaptive.partitioning.enable = true 11:53:00 policy-pap | partitioner.availability.timeout.ms = 0 11:53:00 policy-pap | partitioner.class = null 11:53:00 policy-pap | partitioner.ignore.keys = false 11:53:00 policy-pap | receive.buffer.bytes = 32768 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retries = 2147483647 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap 
| sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | transaction.timeout.ms = 60000 11:53:00 policy-pap | transactional.id = null 11:53:00 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:26.954+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:26.966+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
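The ProducerConfig dump above describes an idempotent producer: enable.idempotence=true, acks=-1, retries=2147483647, and String serializers, which is why the next line reports "Instantiated an idempotent producer." A minimal equivalent sketch, again assuming the logged Kafka clients API (class name and payload are illustrative, not PAP's actual publishing code):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // enabling idempotence implies acks=all and effectively unbounded retries,
            // matching the acks=-1 / retries=2147483647 values logged above
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // hypothetical payload; PAP publishes JSON messages like the PDP_UPDATE logged below
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }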
11:53:00 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446982 11:53:00 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e2695bc6-c57e-4b98-b4cd-fa67d17e9724, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:53:00 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4264fa19-4f07-49fd-b544-73b85dbe7390, alive=false, publisher=null]]: starting 11:53:00 policy-pap | [2025-06-16T11:47:26.983+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:53:00 policy-pap | acks = -1 11:53:00 policy-pap | auto.include.jmx.reporter = true 11:53:00 policy-pap | batch.size = 16384 11:53:00 policy-pap | bootstrap.servers = [kafka:9092] 11:53:00 policy-pap | buffer.memory = 33554432 11:53:00 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:00 policy-pap | client.id = producer-2 11:53:00 policy-pap | compression.gzip.level = -1 11:53:00 policy-pap | compression.lz4.level = 9 11:53:00 policy-pap | compression.type = none 11:53:00 policy-pap | compression.zstd.level = 3 11:53:00 policy-pap | connections.max.idle.ms = 540000 11:53:00 policy-pap | delivery.timeout.ms = 120000 11:53:00 policy-pap | enable.idempotence = true 11:53:00 policy-pap | enable.metrics.push = true 11:53:00 policy-pap | interceptor.classes = [] 11:53:00 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:00 policy-pap | linger.ms = 0 11:53:00 policy-pap | max.block.ms = 60000 11:53:00 policy-pap | max.in.flight.requests.per.connection = 5 11:53:00 policy-pap | max.request.size = 1048576 11:53:00 policy-pap | metadata.max.age.ms = 300000 11:53:00 policy-pap | metadata.max.idle.ms = 300000 11:53:00 policy-pap | metadata.recovery.strategy = none 11:53:00 policy-pap | metric.reporters = [] 11:53:00 policy-pap | metrics.num.samples = 2 11:53:00 policy-pap | metrics.recording.level = INFO 11:53:00 policy-pap | metrics.sample.window.ms = 30000 11:53:00 policy-pap | partitioner.adaptive.partitioning.enable = true 11:53:00 policy-pap | partitioner.availability.timeout.ms = 0 11:53:00 policy-pap | partitioner.class = null 11:53:00 policy-pap | partitioner.ignore.keys = false 11:53:00 policy-pap | receive.buffer.bytes = 32768 11:53:00 policy-pap | reconnect.backoff.max.ms = 1000 11:53:00 policy-pap | reconnect.backoff.ms = 50 11:53:00 policy-pap | request.timeout.ms = 30000 11:53:00 policy-pap | retries = 2147483647 11:53:00 policy-pap | retry.backoff.max.ms = 1000 11:53:00 policy-pap | retry.backoff.ms = 100 11:53:00 policy-pap | sasl.client.callback.handler.class = null 11:53:00 policy-pap | sasl.jaas.config = null 11:53:00 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:00 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:00 policy-pap | sasl.kerberos.service.name = null 11:53:00 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:00 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:00 policy-pap | sasl.login.callback.handler.class = null 11:53:00 policy-pap | sasl.login.class = null 11:53:00 policy-pap | 
sasl.login.connect.timeout.ms = null 11:53:00 policy-pap | sasl.login.read.timeout.ms = null 11:53:00 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:00 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:00 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:00 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:00 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.mechanism = GSSAPI 11:53:00 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:00 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:00 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:00 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:00 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:00 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:00 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:00 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:00 policy-pap | security.protocol = PLAINTEXT 11:53:00 policy-pap | security.providers = null 11:53:00 policy-pap | send.buffer.bytes = 131072 11:53:00 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:00 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:00 policy-pap | ssl.cipher.suites = null 11:53:00 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:00 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:00 policy-pap | ssl.engine.factory.class = null 11:53:00 policy-pap | ssl.key.password = null 11:53:00 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:00 policy-pap | ssl.keystore.certificate.chain = null 11:53:00 policy-pap | ssl.keystore.key = null 11:53:00 policy-pap | ssl.keystore.location = null 11:53:00 policy-pap | ssl.keystore.password = null 11:53:00 policy-pap | ssl.keystore.type = JKS 11:53:00 policy-pap | ssl.protocol = TLSv1.3 11:53:00 policy-pap | ssl.provider = null 11:53:00 policy-pap | ssl.secure.random.implementation = null 11:53:00 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:00 policy-pap | ssl.truststore.certificates = null 11:53:00 policy-pap | ssl.truststore.location = null 11:53:00 policy-pap | ssl.truststore.password = null 11:53:00 policy-pap | ssl.truststore.type = JKS 11:53:00 policy-pap | transaction.timeout.ms = 60000 11:53:00 policy-pap | transactional.id = null 11:53:00 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:00 policy-pap | 11:53:00 policy-pap | [2025-06-16T11:47:26.983+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:00 policy-pap | [2025-06-16T11:47:26.984+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
11:53:00 policy-pap | [2025-06-16T11:47:26.988+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:00 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:00 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446988 11:53:00 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4264fa19-4f07-49fd-b544-73b85dbe7390, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:53:00 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 11:53:00 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 11:53:00 policy-pap | [2025-06-16T11:47:26.991+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 11:53:00 policy-pap | [2025-06-16T11:47:26.991+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 11:53:00 policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 11:53:00 policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 11:53:00 policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 11:53:00 policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|TimerManager|Thread-9] timer manager update started 11:53:00 policy-pap | [2025-06-16T11:47:26.993+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 11:53:00 policy-pap | [2025-06-16T11:47:26.993+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 11:53:00 policy-pap | [2025-06-16T11:47:26.995+00:00|INFO|ServiceManager|main] Policy PAP started 11:53:00 policy-pap | [2025-06-16T11:47:26.995+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.561 seconds (process running for 10.113) 11:53:00 policy-pap | [2025-06-16T11:47:27.409+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A 11:53:00 policy-pap | [2025-06-16T11:47:27.409+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 11:53:00 policy-pap | [2025-06-16T11:47:27.409+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A 11:53:00 policy-pap | [2025-06-16T11:47:27.411+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A 11:53:00 policy-pap | [2025-06-16T11:47:27.440+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 11:53:00 policy-pap | [2025-06-16T11:47:27.441+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:00 policy-pap | [2025-06-16T11:47:27.441+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A 11:53:00 policy-pap | [2025-06-16T11:47:27.441+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 11:53:00 policy-pap | [2025-06-16T11:47:27.559+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:00 policy-pap | [2025-06-16T11:47:27.576+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:00 policy-pap | [2025-06-16T11:47:28.274+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:53:00 policy-pap | [2025-06-16T11:47:28.284+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] (Re-)joining group 11:53:00 policy-pap | [2025-06-16T11:47:28.313+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Request joining group due to: need to re-join with the given member-id: consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 11:53:00 policy-pap | [2025-06-16T11:47:28.314+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] (Re-)joining group 11:53:00 policy-pap | [2025-06-16T11:47:29.052+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:53:00 policy-pap | [2025-06-16T11:47:29.055+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:53:00 policy-pap | [2025-06-16T11:47:29.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 11:53:00 policy-pap | [2025-06-16T11:47:29.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:53:00 policy-pap | [2025-06-16T11:47:31.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573', protocol='range'} 11:53:00 policy-pap | [2025-06-16T11:47:31.354+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Finished assignment for group at generation 1: {consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573=Assignment(partitions=[policy-pdp-pap-0])} 11:53:00 policy-pap | [2025-06-16T11:47:31.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573', protocol='range'} 11:53:00 policy-pap | [2025-06-16T11:47:31.404+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:53:00 policy-pap | [2025-06-16T11:47:31.409+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Adding newly assigned partitions: policy-pdp-pap-0 11:53:00 policy-pap | [2025-06-16T11:47:31.429+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Found no committed offset for partition policy-pdp-pap-0 11:53:00 policy-pap | [2025-06-16T11:47:31.452+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
11:53:00 policy-pap | [2025-06-16T11:47:32.067+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1', protocol='range'} 11:53:00 policy-pap | [2025-06-16T11:47:32.068+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1=Assignment(partitions=[policy-pdp-pap-0])} 11:53:00 policy-pap | [2025-06-16T11:47:32.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1', protocol='range'} 11:53:00 policy-pap | [2025-06-16T11:47:32.075+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:53:00 policy-pap | [2025-06-16T11:47:32.075+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 11:53:00 policy-pap | [2025-06-16T11:47:32.077+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 11:53:00 policy-pap | [2025-06-16T11:47:32.079+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
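At this point both consumer groups have completed their first rebalance: each group has a single member that owns partition policy-pdp-pap-0, with no committed offset, so the position is reset to offset 0 per auto.offset.reset. A sketch of how such assignments could be verified against the same broker with the standard Kafka AdminClient (class name is illustrative):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;

    public class GroupAssignmentCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // the two group ids seen in the log above
                for (ConsumerGroupDescription d : admin
                        .describeConsumerGroups(List.of("policy-pap", "3e2c39b7-eef4-42b5-bb62-dddcc04b4db7"))
                        .all().get().values()) {
                    d.members().forEach(m -> System.out.printf("%s -> %s%n",
                            m.consumerId(), m.assignment().topicPartitions()));
                }
            }
        }
    }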
11:53:00 policy-pap | [2025-06-16T11:47:41.609+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 11:53:00 policy-pap | [2025-06-16T11:47:41.609+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 11:53:00 policy-pap | [2025-06-16T11:47:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 11:53:00 policy-pap | [2025-06-16T11:49:22.294+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 11:53:00 policy-pap | [] 11:53:00 policy-pap | [2025-06-16T11:49:22.295+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:22.295+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:22.302+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:00 policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:49:22.844+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] 11:53:00 policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] 11:53:00 policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:49:22.848+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:22.895+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:22.896+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:49:22.896+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:22.897+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:49:22.928+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:22.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:22.930+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 22460fd0-d018-424b-9e75-a16791862685 11:53:00 policy-pap | 
[2025-06-16T11:49:22.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:49:22.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] 11:53:00 policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:49:22.944+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af start publishing next request 11:53:00 policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting 11:53:00 policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting listener 11:53:00 policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:00 policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting timer 11:53:00 policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] 11:53:00 policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting enqueue 11:53:00 policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange started 11:53:00 policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] 11:53:00 policy-pap | [2025-06-16T11:49:22.948+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:22.960+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:22.960+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 11:53:00 policy-pap | [2025-06-16T11:49:22.968+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:22.969+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8379edd2-a036-4816-ae54-58c6e71b95ed 11:53:00 policy-pap | [2025-06-16T11:49:22.973+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 11:53:00 policy-pap | [2025-06-16T11:49:23.226+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:23.226+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping timer 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] 11:53:00 policy-pap | 
[2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping listener 11:53:00 policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopped 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange successful 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af start publishing next request 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a031b63c-0de0-4623-977c-96546b52eeee, expireMs=1750074593229] 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a031b63c-0de0-4623-977c-96546b52eeee, expireMs=1750074593229] 11:53:00 policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:49:23.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:49:23.246+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:49:23.247+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a031b63c-0de0-4623-977c-96546b52eeee 11:53:00 policy-pap | [2025-06-16T11:49:23.251+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:49:23.251+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:49:26.994+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 11:53:00 policy-pap | [2025-06-16T11:49:52.844+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] 11:53:00 policy-pap | [2025-06-16T11:49:52.947+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] 11:53:00 policy-pap | [2025-06-16T11:50:22.266+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:50:22.272+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:50:22.278+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:00 policy-pap | [2025-06-16T11:50:40.257+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:50:40.258+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-6] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 11:53:00 policy-pap | [2025-06-16T11:50:40.259+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering a deploy for policy zoneB 1.0.6 11:53:00 policy-pap | [2025-06-16T11:50:40.260+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 11:53:00 policy-pap | [2025-06-16T11:50:40.261+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:50:40.261+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:50:40.276+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T11:50:40Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:50:40.305+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] 11:53:00 policy-pap | [2025-06-16T11:50:40.305+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:50:40.312+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:50:40.312+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:50:40.314+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:50:40.314+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:50:40.355+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] 11:53:00 policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | 
[2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:50:40.358+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:50:40.358+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8e087ef1-2fd0-46b9-9508-582ab8231512 11:53:00 policy-pap | [2025-06-16T11:50:40.367+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:50:40.367+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:50:40.368+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:00 policy-pap | [2025-06-16T11:51:04.786+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 11:53:00 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6 11:53:00 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 11:53:00 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:04.798+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:04Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=81b29182-51c9-4f5a-a7a1-52cae730ca23, expireMs=1750074694809] 11:53:00 
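
The deploy/undeploy traffic in this section is driven against PAP's REST API on port 6969 (the http-nio-6969-exec threads above). A sketch of the undeploy call the CSIT likely issues at this point, assuming the standard PAP deployment endpoint path and placeholder credentials, neither of which appears in this log:

    # Hypothetical reconstruction -- URL path and credentials are assumed, not logged.
    import requests

    resp = requests.delete(
        "https://localhost:6969/policy/pap/v1/pdps/policies/zoneB/versions/1.0.6",
        auth=("policyadmin", "<password>"),
        verify=False,  # CSIT environments typically run with self-signed certs
    )
    print(resp.status_code, resp.text)

The WARN lines and the long PfModelException stack trace that follow are the negative half of this test case: a second undeploy of zoneB after it has already been removed, which PAP rejects with "policy does not appear in any PDP group: zoneB null".
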
policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:04.826+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:04.827+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:04.834+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:04.835+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:04.838+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:04.838+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 81b29182-51c9-4f5a-a7a1-52cae730ca23 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=81b29182-51c9-4f5a-a7a1-52cae730ca23, expireMs=1750074694809] 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:00 policy-pap | [2025-06-16T11:51:05.196+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:05.198+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null 11:53:00 policy-pap | [2025-06-16T11:51:05.199+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed 11:53:00 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | 
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:00 policy-pap | at 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:00 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:00 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:00 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:00 policy-pap | at 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:00 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:00 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:00 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:00 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:00 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:00 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:00 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:00 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:00 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:00 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:00 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:00 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:00 policy-pap | [2025-06-16T11:51:05.897+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:05.897+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-1] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 11:53:00 policy-pap | [2025-06-16T11:51:05.897+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy vehicle 1.0.6 11:53:00 policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 11:53:00 policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:05.907+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T11:51:05Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=242d1125-bfd6-47d9-a88c-f3dec38b8930, expireMs=1750074695914] 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:05.922+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | 
[2025-06-16T11:51:05.922+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:05.923+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:05.923+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:05.963+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=242d1125-bfd6-47d9-a88c-f3dec38b8930, expireMs=1750074695914] 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 
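
As with zoneB, the `vehicle` payload above decodes into a data document and a rego.v1 module. The `data` entry `node.vehicle`:

    {
      "vehicles": [
        { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
        { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
      ]
    }

and the `vehicle` policy, which grants "use" of a vehicle only to its owner and projects the requested attributes:

    package vehicle

    import rego.v1

    default allow := false

    allow if {
        user_has_vehicle_access
        action_is_granted
    }

    action_is_granted if {
        "use" in input.actions
    }

    user_has_vehicle_access contains vehicle_data if {
        some vehicle in data.node.vehicle.vehicles
        vehicle.vehicle_id == input.vehicle_id
        vehicle.owner == input.user
        vehicle_data := {info: vehicle[info] | info in input.attributes}
    }

Against the data above, an input such as {"vehicle_id": "v1", "user": "user1", "actions": ["use"], "attributes": ["type", "status"]} evaluates allow to true, with user_has_vehicle_access containing {"type": "car", "status": "available"}.
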
11:53:00 policy-pap | [2025-06-16T11:51:05.967+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:05.968+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 242d1125-bfd6-47d9-a88c-f3dec38b8930 11:53:00 policy-pap | [2025-06-16T11:51:05.973+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:51:05.974+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:51:05.974+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:00 policy-pap | [2025-06-16T11:51:10.305+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] 11:53:00 policy-pap | [2025-06-16T11:51:22.931+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:22.931+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:22.932+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:00 policy-pap | [2025-06-16T11:51:27.004+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 11:53:00 policy-pap | [2025-06-16T11:51:30.283+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:30.283+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 11:53:00 policy-pap 
| [2025-06-16T11:51:30.283+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 11:53:00 policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 11:53:00 policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:30.291+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:30Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:51:30.300+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] 11:53:00 policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] 11:53:00 policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:51:30.303+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] 
discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:30.321+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:30.321+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ac6fa7ae-3295-4484-b921-15eb49f2a5f5 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:00 policy-pap | 
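The same decoding can be scripted. A minimal sketch, not part of the CSIT itself; the file name pdp_update.json is a stand-in for one PDP_UPDATE message body copied out of this log:

    import base64
    import json

    # pdp_update.json: a single PDP_UPDATE body saved from the log above
    with open("pdp_update.json") as f:
        msg = json.load(f)

    # Each deployed policy carries base64 blobs under properties.data
    # (OPA data documents) and properties.policy (Rego modules).
    for policy in msg["policiesToBeDeployed"]:
        props = policy["properties"]
        for section in ("data", "policy"):
            for key, blob in props.get(section, {}).items():
                print(f"--- {section}[{key}] ---")
                print(base64.b64decode(blob).decode("utf-8"))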
[2025-06-16T11:51:30.681+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:30.682+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-3] failed to undeploy policy: vehicle null 11:53:00 policy-pap | [2025-06-16T11:51:30.682+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-3] undeploy policy failed 11:53:00 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:00 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at 
org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:00 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:00 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:00 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:00 
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:00 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:00 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:00 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:00 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:00 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:00 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:00 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:00 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:00 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:00 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:00 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:00 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:00 policy-pap | 
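This WARN and stack trace are the expected result of the negative half of the test: vehicle 1.0.6 was already undeployed at 11:51:30.343, so a repeated DELETE finds it in no PDP group and PAP answers with PfModelException instead of queueing another PdpUpdate (the FrameworkServlet.doDelete frame in the trace shows the request arrived as an HTTP DELETE). A hedged sketch of the kind of call that triggers it; the endpoint path, port and credentials here are assumptions based on a typical PAP CSIT setup, not values read from this log:

    import requests

    # Repeating the undeploy after it has already completed; expected to fail
    # with "policy does not appear in any PDP group: vehicle null".
    resp = requests.delete(
        "https://localhost:6969/policy/pap/v1/pdps/policies/vehicle",  # assumed PAP path
        auth=("policyadmin", "zb!XztG34"),  # hypothetical default credentials
        verify=False,
    )
    print(resp.status_code, resp.text)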
[2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2 11:53:00 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7 11:53:00 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 11:53:00 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:31.350+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-16T11:51:31Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=db176b33-7fa1-414d-893a-c54fbbea91ea, expireMs=1750074721356] 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:51:31.357+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1w
IjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:31.394+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:31.394+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id db176b33-7fa1-414d-893a-c54fbbea91ea 11:53:00 policy-pap | [2025-06-16T11:51:31.395+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 
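As with the vehicle policy, the abac payload decodes to a Rego module plus a JSON data document. The policy under "abac", decoded verbatim from the blob above:

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }

The "node.abac" data blob decodes to a sensor_data array of nine records (ids 0001-0009 covering Sri Lanka, Colombo, Kandy, Galle, Jaffna, Trincomalee, Nuwara Eliya, Anuradhapura and Matara). The first record, decoded verbatim, shows the shape the policy filters on:

    {
        "id": "0001",
        "location": "Sri Lanka",
        "temperature": "28 C",
        "precipitation": "1000 mm",
        "windspeed": "5.5 m/s",
        "humidity": "40%",
        "particle_density": "1.3 g/l",
        "timestamp": "2024-02-26"
    }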
policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=db176b33-7fa1-414d-893a-c54fbbea91ea, expireMs=1750074721356] 11:53:00 policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:00 policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 11:53:00 policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy abac 1.0.7 11:53:00 policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 11:53:00 policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:55.984+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:55Z, user=policyadmin)] 11:53:00 policy-pap | [2025-06-16T11:51:55.991+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=6125c77a-eecc-44d2-a582-c2c1c7662698, expireMs=1750074745992] 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started 11:53:00 policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|network|Thread-7] 
[OUT|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:55.997+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:55.997+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:56.002+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:00 policy-pap | [2025-06-16T11:51:56.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:00 policy-pap | [2025-06-16T11:51:56.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:56.011+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6125c77a-eecc-44d2-a582-c2c1c7662698 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:00 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6125c77a-eecc-44d2-a582-c2c1c7662698, expireMs=1750074745992] 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener 11:53:00 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped 11:53:00 policy-pap | [2025-06-16T11:51:56.020+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful 11:53:00 policy-pap | [2025-06-16T11:51:56.020+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests 11:53:00 policy-pap | [2025-06-16T11:51:56.021+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:00 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:00 policy-pap | [2025-06-16T11:51:56.298+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 11:53:00 policy-pap | [2025-06-16T11:51:56.298+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: abac null 11:53:00 policy-pap | [2025-06-16T11:51:56.298+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed 11:53:00 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:00 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:00 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:00 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:00 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:00 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:00 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:00 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:00 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 
11:53:00 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:00 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:00 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:00 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:00 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:00 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:00 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:00 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:00 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:00 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:00 policy-pap | at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:00 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:00 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:00 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:00 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:00 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:00 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:00 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:00 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:00 
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:00 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:00 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:00 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:00 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:00 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:00 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:00 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:00 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:00 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:00 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:00 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:00 policy-pap | [2025-06-16T11:52:00.301+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] 11:53:00 postgres | The files belonging to this database system will be owned by user "postgres". 11:53:00 postgres | This user must also own the server process. 11:53:00 postgres | 11:53:00 postgres | The database cluster will be initialized with locale "en_US.utf8". 11:53:00 postgres | The default database encoding has accordingly been set to "UTF8". 11:53:00 postgres | The default text search configuration will be set to "english". 11:53:00 postgres | 11:53:00 postgres | Data page checksums are disabled. 11:53:00 postgres | 11:53:00 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 11:53:00 postgres | creating subdirectories ... ok 11:53:00 postgres | selecting dynamic shared memory implementation ... posix 11:53:00 postgres | selecting default max_connections ... 100 11:53:00 postgres | selecting default shared_buffers ... 128MB 11:53:00 postgres | selecting default time zone ... Etc/UTC 11:53:00 postgres | creating configuration files ... ok 11:53:00 postgres | running bootstrap script ... ok 11:53:00 postgres | performing post-bootstrap initialization ... ok 11:53:00 postgres | syncing data to disk ... ok 11:53:00 postgres | 11:53:00 postgres | 11:53:00 postgres | Success. You can now start the database server using: 11:53:00 postgres | 11:53:00 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 11:53:00 postgres | 11:53:00 postgres | initdb: warning: enabling "trust" authentication for local connections 11:53:00 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
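Note on the policy-pap stack trace above: the PfModelException is the expected result of undeploying the same policy twice. The first DELETE (11:51:55) removed abac 1.0.7 from opaGroup, so the retry at 11:51:56 finds it in no PDP group and logs "abac null" because no version could be resolved. A minimal sketch of the two calls the CSIT presumably issues against PAP, assuming the default port 6969 and placeholder credentials (URL shape inferred from PdpGroupDeleteControllerV1 in the trace; host, scheme and auth are assumptions):

# first undeploy succeeds and drives the PDP_UPDATE / PDP_STATUS exchange logged above
curl -sk -u "policyadmin:${PAP_PASSWORD}" -X DELETE \
  "https://localhost:6969/policy/pap/v1/pdps/policies/abac/versions/1.0.7"
# a second undeploy, here without a version, now fails with
# "policy does not appear in any PDP group: abac null"
curl -sk -u "policyadmin:${PAP_PASSWORD}" -X DELETE \
  "https://localhost:6969/policy/pap/v1/pdps/policies/abac"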
11:53:00 postgres | waiting for server to start....2025-06-16 11:46:49.496 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 11:53:00 postgres | 2025-06-16 11:46:49.497 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 11:53:00 postgres | 2025-06-16 11:46:49.502 UTC [52] LOG: database system was shut down at 2025-06-16 11:46:49 UTC 11:53:00 postgres | 2025-06-16 11:46:49.505 UTC [49] LOG: database system is ready to accept connections 11:53:00 postgres | done 11:53:00 postgres | server started 11:53:00 postgres | 11:53:00 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 11:53:00 postgres | 11:53:00 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 11:53:00 postgres | #!/bin/bash -xv 11:53:00 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 11:53:00 postgres | # 11:53:00 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 11:53:00 postgres | # you may not use this file except in compliance with the License. 11:53:00 postgres | # You may obtain a copy of the License at 11:53:00 postgres | # 11:53:00 postgres | # http://www.apache.org/licenses/LICENSE-2.0 11:53:00 postgres | # 11:53:00 postgres | # Unless required by applicable law or agreed to in writing, software 11:53:00 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 11:53:00 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11:53:00 postgres | # See the License for the specific language governing permissions and 11:53:00 postgres | # limitations under the License. 11:53:00 postgres | 11:53:00 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 11:53:00 postgres | CREATE ROLE 11:53:00 postgres | 11:53:00 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | do 11:53:00 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 11:53:00 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 11:53:00 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 11:53:00 postgres | done 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 11:53:00 postgres | GRANT 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 
11:53:00 postgres | GRANT 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 11:53:00 postgres | GRANT 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 11:53:00 postgres | GRANT 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 11:53:00 postgres | GRANT 11:53:00 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:00 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 11:53:00 postgres | CREATE DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 11:53:00 postgres | ALTER DATABASE 11:53:00 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 11:53:00 postgres | GRANT 11:53:00 postgres | 11:53:00 postgres | waiting for server to shut down....2025-06-16 11:46:50.978 UTC [49] LOG: received fast shutdown request 11:53:00 postgres | 2025-06-16 11:46:50.981 UTC [49] LOG: aborting any active transactions 11:53:00 postgres | 2025-06-16 11:46:50.985 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 11:53:00 postgres | 2025-06-16 11:46:50.985 UTC [50] LOG: shutting down 11:53:00 postgres | 2025-06-16 11:46:50.987 UTC [50] LOG: checkpoint starting: shutdown immediate 11:53:00 postgres | 2025-06-16 11:46:51.413 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.335 s, sync=0.085 s, total=0.428 s; sync files=1788, longest=0.010 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 11:53:00 postgres | 2025-06-16 11:46:51.424 UTC [49] LOG: database system is shut down 11:53:00 postgres | done 11:53:00 postgres | server stopped 11:53:00 postgres | 11:53:00 postgres | PostgreSQL init process complete; ready for start up. 
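At this point db-pg.sh has created six databases and handed ownership to policy_user. A quick hedged verification one could run against the container (the container name postgres matches the teardown log below; the psql flags are standard):

docker exec postgres psql -U postgres -d postgres -Atc \
  "SELECT datname, pg_get_userbyid(datdba) FROM pg_database
   WHERE datname IN ('migration','pooling','policyadmin','policyclamp','operationshistory','clampacm');"
# expected: six rows, each owned by policy_user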
11:53:00 postgres | 11:53:00 postgres | 2025-06-16 11:46:51.504 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 11:53:00 postgres | 2025-06-16 11:46:51.505 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 11:53:00 postgres | 2025-06-16 11:46:51.505 UTC [1] LOG: listening on IPv6 address "::", port 5432 11:53:00 postgres | 2025-06-16 11:46:51.507 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 11:53:00 postgres | 2025-06-16 11:46:51.514 UTC [102] LOG: database system was shut down at 2025-06-16 11:46:51 UTC 11:53:00 postgres | 2025-06-16 11:46:51.520 UTC [1] LOG: database system is ready to accept connections 11:53:00 postgres | 2025-06-16 11:51:51.582 UTC [100] LOG: checkpoint starting: time 11:53:00 postgres | 2025-06-16 11:52:56.484 UTC [100] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.874 s, sync=0.021 s, total=64.902 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/31502E0, redo lsn=0/314DDE0 11:53:00 prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 11:53:00 prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 11:53:00 prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 11:53:00 prometheus | time=2025-06-16T11:46:49.653Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 11:53:00 prometheus | time=2025-06-16T11:46:49.655Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 11:53:00 prometheus | time=2025-06-16T11:46:49.657Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 11:53:00 prometheus | time=2025-06-16T11:46:49.663Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 11:53:00 prometheus | time=2025-06-16T11:46:49.663Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 11:53:00 prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 11:53:00 prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.04µs 11:53:00 prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 11:53:00 prometheus | time=2025-06-16T11:46:49.665Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=327.165µs 11:53:00 prometheus | time=2025-06-16T11:46:49.665Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=46.291µs wal_replay_duration=358.366µs wbl_replay_duration=210ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.04µs total_replay_duration=537.429µs 11:53:00 prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 11:53:00 prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1290 msg="TSDB started" 11:53:00 prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 11:53:00 prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 11:53:00 prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.81µs remote_storage=2.69µs web_handler=710ns query_engine=1.28µs scrape=280.214µs scrape_sd=258.445µs notify=152.912µs notify_sd=53.091µs rules=1.84µs tracing=6.33µs filename=/etc/prometheus/prometheus.yml totalDuration=1.684337ms 11:53:00 prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 11:53:00 prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 11:53:00 zookeeper | ===> User 11:53:00 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:53:00 zookeeper | ===> Configuring ... 11:53:00 zookeeper | ===> Running preflight checks ... 11:53:00 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 11:53:00 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 11:53:00 zookeeper | ===> Launching ... 11:53:00 zookeeper | ===> Launching zookeeper ... 
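Prometheus above reports "Server is ready to receive web requests" on 0.0.0.0:9090. A hedged smoke check using its built-in probe endpoints (the /-/healthy and /-/ready handlers are standard Prometheus; reaching them on localhost assumes the compose file publishes the port):

curl -sf http://localhost:9090/-/healthy   # exits 0 once the server is healthy
curl -sf http://localhost:9090/-/ready     # exits 0 once the server is ready to serve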
11:53:00 zookeeper | [2025-06-16 11:46:50,463] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,466] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,466] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,466] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,466] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,469] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:00 zookeeper | [2025-06-16 11:46:50,469] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:00 zookeeper | [2025-06-16 11:46:50,469] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:00 zookeeper | [2025-06-16 11:46:50,469] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 11:53:00 zookeeper | [2025-06-16 11:46:50,470] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 11:53:00 zookeeper | [2025-06-16 11:46:50,470] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,471] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,471] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,471] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,471] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:00 zookeeper | [2025-06-16 11:46:50,471] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 11:53:00 zookeeper | [2025-06-16 11:46:50,485] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 11:53:00 zookeeper | [2025-06-16 11:46:50,487] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:53:00 zookeeper | [2025-06-16 11:46:50,487] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:53:00 zookeeper | [2025-06-16 11:46:50,489] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,503] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 
zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,506] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 11:53:00 zookeeper | [2025-06-16 11:46:50,507] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,507] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,513] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:53:00 zookeeper | [2025-06-16 11:46:50,513] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:00 zookeeper | [2025-06-16 11:46:50,517] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,517] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,517] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 11:53:00 zookeeper | [2025-06-16 11:46:50,517] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 11:53:00 zookeeper | [2025-06-16 11:46:50,518] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,540] INFO Logging initialized @403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 11:53:00 zookeeper | [2025-06-16 11:46:50,601] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 11:53:00 zookeeper | [2025-06-16 11:46:50,602] WARN Empty contextPath 
(org.eclipse.jetty.server.handler.ContextHandler) 11:53:00 zookeeper | [2025-06-16 11:46:50,623] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) 11:53:00 zookeeper | [2025-06-16 11:46:50,669] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 11:53:00 zookeeper | [2025-06-16 11:46:50,669] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 11:53:00 zookeeper | [2025-06-16 11:46:50,670] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 11:53:00 zookeeper | [2025-06-16 11:46:50,673] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 11:53:00 zookeeper | [2025-06-16 11:46:50,681] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 11:53:00 zookeeper | [2025-06-16 11:46:50,691] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 11:53:00 zookeeper | [2025-06-16 11:46:50,691] INFO Started @558ms (org.eclipse.jetty.server.Server) 11:53:00 zookeeper | [2025-06-16 11:46:50,691] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,694] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,695] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,696] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,696] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,721] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,721] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:53:00 zookeeper | [2025-06-16 11:46:50,722] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 11:53:00 zookeeper | [2025-06-16 11:46:50,722] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 11:53:00 zookeeper | [2025-06-16 11:46:50,726] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 11:53:00 zookeeper | [2025-06-16 11:46:50,727] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:00 zookeeper | [2025-06-16 11:46:50,729] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 11:53:00 zookeeper | [2025-06-16 11:46:50,730] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:00 zookeeper | [2025-06-16 11:46:50,730] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:00 zookeeper | [2025-06-16 11:46:50,736] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 11:53:00 zookeeper | [2025-06-16 11:46:50,736] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 11:53:00 zookeeper | [2025-06-16 11:46:50,752] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 11:53:00 zookeeper | [2025-06-16 11:46:50,753] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 11:53:00 zookeeper | [2025-06-16 11:46:51,807] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 11:53:00 Tearing down containers... 
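The teardown below is compose-driven: the network name compose_default later in the log implies a compose project named "compose". A hedged equivalent of what the CSIT scripts presumably run (the flags are standard docker compose options; the exact invocation is an assumption):

docker compose -p compose down --volumes --remove-orphans
# stops containers in reverse dependency order (policy-csit first, postgres last)
# and finally removes the compose_default network, as logged below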
11:53:00 Container policy-csit Stopping
11:53:00 Container grafana Stopping
11:53:00 Container policy-opa-pdp Stopping
11:53:00 Container policy-csit Stopped
11:53:00 Container policy-csit Removing
11:53:00 Container policy-csit Removed
11:53:01 Container grafana Stopped
11:53:01 Container grafana Removing
11:53:01 Container grafana Removed
11:53:01 Container prometheus Stopping
11:53:01 Container prometheus Stopped
11:53:01 Container prometheus Removing
11:53:01 Container prometheus Removed
11:53:10 Container policy-opa-pdp Stopped
11:53:10 Container policy-opa-pdp Removing
11:53:10 Container policy-opa-pdp Removed
11:53:10 Container policy-pap Stopping
11:53:21 Container policy-pap Stopped
11:53:21 Container policy-pap Removing
11:53:21 Container policy-pap Removed
11:53:21 Container policy-api Stopping
11:53:21 Container kafka Stopping
11:53:22 Container kafka Stopped
11:53:22 Container kafka Removing
11:53:22 Container kafka Removed
11:53:22 Container zookeeper Stopping
11:53:22 Container zookeeper Stopped
11:53:22 Container zookeeper Removing
11:53:22 Container zookeeper Removed
11:53:31 Container policy-api Stopped
11:53:31 Container policy-api Removing
11:53:31 Container policy-api Removed
11:53:31 Container policy-db-migrator Stopping
11:53:31 Container policy-db-migrator Stopped
11:53:31 Container policy-db-migrator Removing
11:53:31 Container policy-db-migrator Removed
11:53:31 Container postgres Stopping
11:53:32 Container postgres Stopped
11:53:32 Container postgres Removing
11:53:32 Container postgres Removed
11:53:32 Network compose_default Removing
11:53:32 Network compose_default Removed
11:53:32 $ ssh-agent -k
11:53:32 unset SSH_AUTH_SOCK;
11:53:32 unset SSH_AGENT_PID;
11:53:32 echo Agent pid 2075 killed;
11:53:32 [ssh-agent] Stopped.
11:53:32 Robot results publisher started...
11:53:32 INFO: Checking test criticality is deprecated and will be dropped in a future release!
11:53:32 -Parsing output xml:
11:53:32 Done!
11:53:32 -Copying log files to build dir:
11:53:33 Done!
11:53:33 -Assigning results to build:
11:53:33 Done!
11:53:33 -Checking thresholds:
11:53:33 Done!
11:53:33 Done publishing Robot results.
11:53:33 [PostBuildScript] - [INFO] Executing post build scripts.
11:53:33 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins17740530698788201198.sh
11:53:33 ---> sysstat.sh
11:53:33 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins11928958053683763973.sh
11:53:33 ---> package-listing.sh
11:53:33 ++ tr '[:upper:]' '[:lower:]'
11:53:33 ++ facter osfamily
11:53:33 + OS_FAMILY=debian
11:53:33 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
11:53:33 + START_PACKAGES=/tmp/packages_start.txt
11:53:33 + END_PACKAGES=/tmp/packages_end.txt
11:53:33 + DIFF_PACKAGES=/tmp/packages_diff.txt
11:53:33 + PACKAGES=/tmp/packages_start.txt
11:53:33 + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
11:53:33 + PACKAGES=/tmp/packages_end.txt
11:53:33 + case "${OS_FAMILY}" in
11:53:33 + dpkg -l
11:53:33 + grep '^ii'
11:53:33 + '[' -f /tmp/packages_start.txt ']'
11:53:33 + '[' -f /tmp/packages_end.txt ']'
11:53:33 + diff /tmp/packages_start.txt /tmp/packages_end.txt
11:53:33 + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
11:53:33 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
11:53:33 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
11:53:33 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins14212807151681641966.sh
11:53:33 ---> capture-instance-metadata.sh
11:53:33 Setup pyenv:
11:53:33   system
11:53:33   3.8.13
11:53:33   3.9.13
11:53:33 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:53:33 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv
11:53:35 lf-activate-venv(): INFO: Installing: lftools
11:53:44 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
11:53:44 INFO: Running in OpenStack, capturing instance metadata
11:53:44 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2536458372897164201.sh
11:53:44 provisioning config files...
11:53:44 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/config12631572253889805859tmp
11:53:44 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
11:53:44 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
11:53:44 [EnvInject] - Injecting environment variables from a build step.
11:53:44 [EnvInject] - Injecting as environment variables the properties content
11:53:44 SERVER_ID=logs
11:53:44
11:53:44 [EnvInject] - Variables injected successfully.
11:53:44 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins5378522003727158682.sh
11:53:44 ---> create-netrc.sh
11:53:44 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins11660961740779801257.sh
11:53:44 ---> python-tools-install.sh
11:53:44 Setup pyenv:
11:53:44   system
11:53:44   3.8.13
11:53:44   3.9.13
11:53:44 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:53:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv
11:53:46 lf-activate-venv(): INFO: Installing: lftools
11:53:54 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
11:53:54 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2874718829872972836.sh
11:53:54 ---> sudo-logs.sh
11:53:54 Archiving 'sudo' log..
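The package-listing.sh trace above amounts to a small snapshot-and-diff routine: take a dpkg listing at the start and again at the end of the job, diff the two snapshots, and archive all three files with the build. The following is a hedged bash reconstruction of that logic from the trace alone; the real LF script may handle more OS families and edge cases, and only the paths and variable names visible in the trace are taken as given.

    #!/bin/bash
    # Sketch of the package-listing logic implied by the set -x trace above.
    OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')
    workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
    START_PACKAGES=/tmp/packages_start.txt
    END_PACKAGES=/tmp/packages_end.txt
    DIFF_PACKAGES=/tmp/packages_diff.txt

    # Default to the start-of-job snapshot; when a workspace is set
    # (post-build run, as here), write the end-of-job snapshot instead.
    PACKAGES=$START_PACKAGES
    [ "$workspace" ] && PACKAGES=$END_PACKAGES

    case "${OS_FAMILY}" in
      debian)
        # Record only installed packages ('ii' status lines).
        dpkg -l | grep '^ii' > "$PACKAGES"
        ;;
    esac

    # With both snapshots present, capture what the job installed/removed.
    # diff exits 1 when the files differ, so tolerate that under set -e.
    if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
      diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
    fi

    # Archive the three lists alongside the build artifacts.
    if [ "$workspace" ]; then
      mkdir -p "$workspace/archives/"
      cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$workspace/archives/"
    fi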
11:53:54 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2455489172863964941.sh
11:53:54 ---> job-cost.sh
11:53:54 Setup pyenv:
11:53:54   system
11:53:54   3.8.13
11:53:54   3.9.13
11:53:54 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:53:55 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv
11:53:56 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
11:54:01 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
11:54:01 INFO: No Stack...
11:54:02 INFO: Retrieving Pricing Info for: v3-standard-8
11:54:02 INFO: Archiving Costs
11:54:02 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash -l /tmp/jenkins8958251358272619223.sh
11:54:02 ---> logs-deploy.sh
11:54:02 Setup pyenv:
11:54:02   system
11:54:02   3.8.13
11:54:02   3.9.13
11:54:02 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:54:02 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv
11:54:04 lf-activate-venv(): INFO: Installing: lftools
11:54:12 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
11:54:12 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-policy-opa-pdp/179
11:54:12 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
11:54:13 Archives upload complete.
11:54:13 INFO: archiving logs to Nexus
11:54:14 ---> uname -a:
11:54:14 Linux prd-ubuntu1804-docker-8c-8g-21584 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
11:54:14
11:54:14
11:54:14 ---> lscpu:
11:54:14 Architecture: x86_64
11:54:14 CPU op-mode(s): 32-bit, 64-bit
11:54:14 Byte Order: Little Endian
11:54:14 CPU(s): 8
11:54:14 On-line CPU(s) list: 0-7
11:54:14 Thread(s) per core: 1
11:54:14 Core(s) per socket: 1
11:54:14 Socket(s): 8
11:54:14 NUMA node(s): 1
11:54:14 Vendor ID: AuthenticAMD
11:54:14 CPU family: 23
11:54:14 Model: 49
11:54:14 Model name: AMD EPYC-Rome Processor
11:54:14 Stepping: 0
11:54:14 CPU MHz: 2799.998
11:54:14 BogoMIPS: 5599.99
11:54:14 Virtualization: AMD-V
11:54:14 Hypervisor vendor: KVM
11:54:14 Virtualization type: full
11:54:14 L1d cache: 32K
11:54:14 L1i cache: 32K
11:54:14 L2 cache: 512K
11:54:14 L3 cache: 16384K
11:54:14 NUMA node0 CPU(s): 0-7
11:54:14 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
11:54:14
11:54:14
11:54:14 ---> nproc:
11:54:14 8
11:54:14
11:54:14
11:54:14 ---> df -h:
11:54:14 Filesystem Size Used Avail Use% Mounted on
11:54:14 udev 16G 0 16G 0% /dev
11:54:14 tmpfs 3.2G 708K 3.2G 1% /run
11:54:14 /dev/vda1 155G 15G 141G 10% /
11:54:14 tmpfs 16G 0 16G 0% /dev/shm
11:54:14 tmpfs 5.0M 0 5.0M 0% /run/lock
11:54:14 tmpfs 16G 0 16G 0% /sys/fs/cgroup
11:54:14 /dev/vda15 105M 4.4M 100M 5% /boot/efi
11:54:14 tmpfs 3.2G 0 3.2G 0% /run/user/1001
11:54:14
11:54:14
11:54:14 ---> free -m:
11:54:14 total used free shared buff/cache available
11:54:14 Mem: 32167 897 24029 0 7239 30813
11:54:14 Swap: 1023 0 1023
11:54:14
11:54:14
11:54:14 ---> ip addr:
11:54:14 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
11:54:14     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11:54:14     inet 127.0.0.1/8 scope host lo
11:54:14        valid_lft forever preferred_lft forever
11:54:14     inet6 ::1/128 scope host
11:54:14        valid_lft forever preferred_lft forever
11:54:14 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
11:54:14     link/ether fa:16:3e:0f:da:b6 brd ff:ff:ff:ff:ff:ff
11:54:14     inet 10.30.106.89/23 brd 10.30.107.255 scope global dynamic ens3
11:54:14        valid_lft 85803sec preferred_lft 85803sec
11:54:14     inet6 fe80::f816:3eff:fe0f:dab6/64 scope link
11:54:14        valid_lft forever preferred_lft forever
11:54:14 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
11:54:14     link/ether 02:42:14:8a:99:63 brd ff:ff:ff:ff:ff:ff
11:54:14     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
11:54:14        valid_lft forever preferred_lft forever
11:54:14     inet6 fe80::42:14ff:fe8a:9963/64 scope link
11:54:14        valid_lft forever preferred_lft forever
11:54:14
11:54:14
11:54:14 ---> sar -b -r -n DEV:
11:54:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21584) 06/16/25 _x86_64_ (8 CPU)
11:54:14
11:54:14 11:44:20 LINUX RESTART (8 CPU)
11:54:14
11:54:14 11:45:01 tps rtps wtps bread/s bwrtn/s
11:54:14 11:46:01 171.05 37.31 133.74 2922.45 73115.81
11:54:14 11:47:01 734.59 4.88 729.71 493.92 233484.29
11:54:14 11:48:01 29.10 0.07 29.03 3.07 7288.79
11:54:14 11:49:01 4.50 0.00 4.50 0.00 114.51
11:54:14 11:50:01 43.91 0.22 43.69 34.98 7335.30
11:54:14 11:51:01 177.64 0.28 177.35 15.06 26740.88
11:54:14 11:52:01 10.51 0.00 10.51 0.00 239.43
11:54:14 11:53:01 25.41 0.02 25.40 4.27 416.20
11:54:14 11:54:01 54.42 1.27 53.16 89.45 2228.43
11:54:14 Average: 139.19 4.90 134.29 396.58 39054.69
11:54:14
11:54:14 11:45:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
11:54:14 11:46:01 29802268 31679092 3136952 9.52 77184 2101728 1529496 4.50 902244 1924656 338680
11:54:14 11:47:01 24443096 31003320 8496124 25.79 161688 6487168 6171872 18.16 1782616 6075824 52148
11:54:14 11:48:01 23414264 30072272 9524956 28.92 163576 6586636 7309216 21.51 2793708 6082196 468
11:54:14 11:49:01 23398312 30056652 9540908 28.97 163748 6587180 7534168 22.17 2808484 6082380 368
11:54:14 11:50:01 23070516 29951412 9868704 29.96 176700 6773860 7817260 23.00 2965360 6223076 18456
11:54:14 11:51:01 22698936 29898228 10240284 31.09 204872 7034144 7915936 23.29 3082244 6446800 2240
11:54:14 11:52:01 22687576 29888300 10251644 31.12 205008 7034936 7949304 23.39 3097596 6441560 48
11:54:14 11:53:01 22870640 30029628 10068580 30.57 205260 6997352 7364520 21.67 2971028 6395448 264
11:54:14 11:54:01 24586500 31540316 8352720 25.36 206704 6780748 1619836 4.77 1512112 6202632 11180
11:54:14 Average: 24108012 30457691 8831208 26.81 173860 6264861 6134623 18.05 2435044 5763841 47095
11:54:14
11:54:14 11:45:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
11:54:14 11:46:01 lo 8.93 8.93 0.86 0.86 0.00 0.00 0.00 0.00
11:54:14 11:46:01 ens3 411.96 304.32 3804.06 28.50 0.00 0.00 0.00 0.00
11:54:14 11:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:47:01 vethebbbfbd 1.60 1.65 0.16 0.17 0.00 0.00 0.00 0.00
11:54:14 11:47:01 br-1d334709b040 37.76 46.88 2.39 311.36 0.00 0.00 0.00 0.00
11:54:14 11:47:01 vetha721af5 44.31 60.74 6.75 8.52 0.00 0.00 0.00 0.00
11:54:14 11:47:01 lo 6.13 6.13 0.55 0.55 0.00 0.00 0.00 0.00
11:54:14 11:48:01 vethebbbfbd 9.45 8.67 1.18 1.26 0.00 0.00 0.00 0.00
11:54:14 11:48:01 br-1d334709b040 0.37 0.27 0.02 0.02 0.00 0.00 0.00 0.00
11:54:14 11:48:01 vetha721af5 106.15 112.20 21.13 18.22 0.00 0.00 0.00 0.00
11:54:14 11:48:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00
11:54:14 11:49:01 vethebbbfbd 13.03 8.78 1.10 1.23 0.00 0.00 0.00 0.00
11:54:14 11:49:01 br-1d334709b040 0.38 0.22 0.02 0.01 0.00 0.00 0.00 0.00
11:54:14 11:49:01 vetha721af5 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:49:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00
11:54:14 11:50:01 vethebbbfbd 15.76 10.91 1.61 1.61 0.00 0.00 0.00 0.00
11:54:14 11:50:01 br-1d334709b040 0.20 0.27 0.02 0.02 0.00 0.00 0.00 0.00
11:54:14 11:50:01 vetha721af5 100.05 100.56 25.15 11.39 0.00 0.00 0.00 0.00
11:54:14 11:50:01 lo 2.17 2.17 0.18 0.18 0.00 0.00 0.00 0.00
11:54:14 11:51:01 vethebbbfbd 14.45 9.77 1.37 1.42 0.00 0.00 0.00 0.00
11:54:14 11:51:01 br-1d334709b040 0.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:51:01 vetha721af5 165.56 166.39 40.93 18.17 0.00 0.00 0.00 0.00
11:54:14 11:51:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00
11:54:14 11:52:01 vethebbbfbd 17.85 13.35 2.19 2.00 0.00 0.00 0.00 0.00
11:54:14 11:52:01 br-1d334709b040 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:52:01 vetha721af5 683.85 686.92 166.11 73.93 0.00 0.00 0.00 0.01
11:54:14 11:52:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00
11:54:14 11:53:01 vethebbbfbd 13.71 9.08 1.15 1.29 0.00 0.00 0.00 0.00
11:54:14 11:53:01 br-1d334709b040 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:53:01 vetha721af5 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
11:54:14 11:53:01 lo 3.60 3.60 0.31 0.31 0.00 0.00 0.00 0.00
11:54:14 11:54:01 lo 0.47 0.47 0.05 0.05 0.00 0.00 0.00 0.00
11:54:14 11:54:01 ens3 2166.29 1318.51 37438.79 195.03 0.00 0.00 0.00 0.00
11:54:14 11:54:01 docker0 118.56 179.80 7.74 1349.26 0.00 0.00 0.00 0.00
11:54:14 Average: lo 2.95 2.95 0.26 0.26 0.00 0.00 0.00 0.00
11:54:14 Average: ens3 204.29 121.44 4075.16 13.84 0.00 0.00 0.00 0.00
11:54:14 Average: docker0 13.20 20.02 0.86 150.20 0.00 0.00 0.00 0.00
11:54:14
11:54:14
11:54:14 ---> sar -P ALL:
11:54:14 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21584) 06/16/25 _x86_64_ (8 CPU)
11:54:14
11:54:14 11:44:20 LINUX RESTART (8 CPU)
11:54:14
11:54:14 11:45:01 CPU %user %nice %system %iowait %steal %idle
11:54:14 11:46:01 all 9.30 0.00 0.85 4.47 0.04 85.34
11:54:14 11:46:01 0 21.56 0.00 1.87 2.54 0.10 73.93
11:54:14 11:46:01 1 3.19 0.00 0.64 4.35 0.03 91.79
11:54:14 11:46:01 2 6.87 0.00 0.69 0.37 0.03 92.05
11:54:14 11:46:01 3 0.95 0.00 0.28 0.17 0.02 98.58
11:54:14 11:46:01 4 6.44 0.00 0.72 4.52 0.03 88.28
11:54:14 11:46:01 5 3.01 0.00 0.38 21.38 0.03 75.19
11:54:14 11:46:01 6 13.63 0.00 1.10 0.93 0.03 84.30
11:54:14 11:46:01 7 18.79 0.00 1.17 1.51 0.05 78.48
11:54:14 11:47:01 all 18.73 0.00 7.52 12.66 0.08 61.01
11:54:14 11:47:01 0 18.64 0.00 7.04 4.41 0.08 69.82
11:54:14 11:47:01 1 18.30 0.00 7.86 15.64 0.07 58.14
11:54:14 11:47:01 2 19.82 0.00 7.45 11.22 0.08 61.43
11:54:14 11:47:01 3 18.72 0.00 7.98 30.29 0.08 42.93
11:54:14 11:47:01 4 18.53 0.00 6.76 12.11 0.10 62.49
11:54:14 11:47:01 5 18.21 0.00 7.50 6.65 0.07 67.57
11:54:14 11:47:01 6 17.39 0.00 7.93 15.37 0.07 59.24
11:54:14 11:47:01 7 20.26 0.00 7.63 5.74 0.07 66.30
11:54:14 11:48:01 all 19.23 0.00 1.68 0.28 0.07 78.74
11:54:14 11:48:01 0 19.82 0.00 1.75 0.07 0.05 78.31
11:54:14 11:48:01 1 23.47 0.00 1.97 0.67 0.07 73.82
11:54:14 11:48:01 2 23.97 0.00 1.76 0.12 0.07 74.09
11:54:14 11:48:01 3 16.51 0.00 1.49 0.69 0.07 81.24
11:54:14 11:48:01 4 20.00 0.00 1.71 0.02 0.07 78.20
11:54:14 11:48:01 5 17.61 0.00 1.72 0.07 0.07 80.54
11:54:14 11:48:01 6 18.59 0.00 1.44 0.10 0.08 79.79
11:54:14 11:48:01 7 13.90 0.00 1.55 0.50 0.05 83.99
11:54:14 11:49:01 all 0.70 0.00 0.14 0.02 0.03 99.11
11:54:14 11:49:01 0 0.78 0.00 0.12 0.00 0.03 99.07
11:54:14 11:49:01 1 0.85 0.00 0.12 0.00 0.02 99.02
11:54:14 11:49:01 2 0.55 0.00 0.10 0.02 0.03 99.30
11:54:14 11:49:01 3 0.28 0.00 0.08 0.00 0.02 99.61
11:54:14 11:49:01 4 1.15 0.00 0.23 0.00 0.07 98.55
11:54:14 11:49:01 5 0.55 0.00 0.17 0.02 0.03 99.23
11:54:14 11:49:01 6 0.65 0.00 0.20 0.00 0.03 99.12
11:54:14 11:49:01 7 0.77 0.00 0.17 0.10 0.03 98.93
11:54:14 11:50:01 all 3.36 0.00 0.77 0.23 0.04 95.60
11:54:14 11:50:01 0 2.97 0.00 0.59 0.00 0.03 96.40
11:54:14 11:50:01 1 3.64 0.00 0.88 1.04 0.05 94.38
11:54:14 11:50:01 2 2.58 0.00 0.58 0.03 0.03 96.77
11:54:14 11:50:01 3 2.86 0.00 0.72 0.14 0.03 96.25
11:54:14 11:50:01 4 2.89 0.00 1.09 0.12 0.07 95.84
11:54:14 11:50:01 5 5.06 0.00 0.73 0.00 0.05 94.16
11:54:14 11:50:01 6 2.62 0.00 0.56 0.02 0.05 96.76
11:54:14 11:50:01 7 4.23 0.00 0.95 0.46 0.05 94.31
11:54:14 11:51:01 all 7.35 0.00 1.89 1.36 0.07 89.32
11:54:14 11:51:01 0 4.97 0.00 1.66 2.75 0.05 90.57
11:54:14 11:51:01 1 13.70 0.00 2.38 1.06 0.08 82.78
11:54:14 11:51:01 2 10.75 0.00 1.83 0.18 0.05 87.19
11:54:14 11:51:01 3 3.60 0.00 1.76 2.03 0.07 92.54
11:54:14 11:51:01 4 7.14 0.00 2.33 0.07 0.07 90.40
11:54:14 11:51:01 5 4.45 0.00 0.97 0.03 0.07 94.48
11:54:14 11:51:01 6 10.59 0.00 2.49 0.27 0.07 86.58
11:54:14 11:51:01 7 3.59 0.00 1.78 4.53 0.08 90.02
11:54:14 11:52:01 all 3.61 0.00 0.62 0.05 0.05 95.66
11:54:14 11:52:01 0 3.65 0.00 0.40 0.03 0.05 95.86
11:54:14 11:52:01 1 2.87 0.00 0.60 0.00 0.03 96.50
11:54:14 11:52:01 2 3.16 0.00 0.42 0.25 0.05 96.12
11:54:14 11:52:01 3 3.24 0.00 0.70 0.00 0.07 95.99
11:54:14 11:52:01 4 4.36 0.00 0.55 0.02 0.07 95.01
11:54:14 11:52:01 5 4.75 0.00 0.60 0.02 0.07 94.57
11:54:14 11:52:01 6 3.56 0.00 0.60 0.03 0.05 95.76
11:54:14 11:52:01 7 3.31 0.00 1.09 0.03 0.05 95.52
11:54:14 11:53:01 all 1.45 0.00 0.42 0.06 0.04 98.03
11:54:14 11:53:01 0 1.15 0.00 0.43 0.00 0.05 98.36
11:54:14 11:53:01 1 1.40 0.00 0.40 0.00 0.03 98.17
11:54:14 11:53:01 2 1.02 0.00 0.45 0.03 0.03 98.46
11:54:14 11:53:01 3 1.74 0.00 0.40 0.02 0.05 97.80
11:54:14 11:53:01 4 1.25 0.00 0.45 0.02 0.07 98.21
11:54:14 11:53:01 5 1.19 0.00 0.35 0.22 0.03 98.21
11:54:14 11:53:01 6 1.84 0.00 0.42 0.00 0.05 97.70
11:54:14 11:53:01 7 2.03 0.00 0.42 0.20 0.03 97.32
11:54:14 11:54:01 all 5.80 0.00 0.66 0.21 0.03 93.29
11:54:14 11:54:01 0 3.35 0.00 0.62 0.05 0.03 95.95
11:54:14 11:54:01 1 0.58 0.00 0.33 0.07 0.02 99.00
11:54:14 11:54:01 2 1.39 0.00 0.48 0.12 0.07 97.95
11:54:14 11:54:01 3 13.28 0.00 0.73 0.20 0.03 85.75
11:54:14 11:54:01 4 0.80 0.00 0.43 0.05 0.02 98.70
11:54:14 11:54:01 5 9.72 0.00 0.92 0.08 0.03 89.25
11:54:14 11:54:01 6 16.39 0.00 1.18 0.07 0.05 82.31
11:54:14 11:54:01 7 0.92 0.00 0.58 1.07 0.02 97.41
11:54:14 Average: all 7.72 0.00 1.61 2.14 0.05 88.48
11:54:14 Average: 0 8.54 0.00 1.61 1.09 0.05 88.71
11:54:14 Average: 1 7.55 0.00 1.68 2.52 0.04 88.21
11:54:14 Average: 2 7.78 0.00 1.52 1.37 0.05 89.28
11:54:14 Average: 3 6.78 0.00 1.56 3.69 0.05 87.92
11:54:14 Average: 4 6.94 0.00 1.58 1.87 0.06 89.54
11:54:14 Average: 5 7.17 0.00 1.48 3.17 0.05 88.13
11:54:14 Average: 6 9.47 0.00 1.76 1.85 0.05 86.86
11:54:14 Average: 7 7.52 0.00 1.70 1.57 0.05 89.17
11:54:14
11:54:14
11:54:14