Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22474 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-jlVdb585OSnm/agent.2108
SSH_AGENT_PID=2110
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_13891817379203038896.key (/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_13891817379203038896.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
Commit message: "Add missing delete composition in CSIT"
 > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
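For reference, the SCM step above can be reproduced by hand; a minimal sketch, assuming anonymous read access to the ONAP mirror rather than the job's Gerrit SSH credentials:

  # clone policy/docker and check out the same revision the job built
  git clone git://cloud.onap.org/mirror/policy/docker.git
  cd docker
  git checkout 8b99874d0fe646f509546f6b38b185b8f089ba50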
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins13220850809846249556.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xS1v
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-xS1v/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-xS1v/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.38.40 botocore==1.38.40 bs4==0.0.2 cachetools==5.5.2 certifi==2025.6.15 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.2 click==8.2.1 cliff==4.10.0 cmd2==2.6.1 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.3.9 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0 durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.44 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.12 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.24.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==5.4.0 MarkupSafe==3.0.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.6.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==4.0.2 oslo.config==9.8.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.1.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 packaging==25.0 pbr==6.1.1 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.6.1 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.2.0 python-jenkins==1.8.2 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.25.1 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.0 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.31.2 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.2 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
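The python-tools-install.sh step boils down to creating a throwaway venv, installing lftools, and freezing the result; a minimal sketch of the equivalent commands (lf-activate-venv() is LF releng tooling, and the venv path below is simply the one reported in the log):

  python3 -m venv /tmp/venv-xS1v   # "Creating python3 venv at /tmp/venv-xS1v"
  . /tmp/venv-xS1v/bin/activate
  pip install lftools              # "Installing: lftools"
  pip freeze                       # "Generating Requirements File"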
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh /tmp/jenkins5499028352101191738.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh -xe /tmp/jenkins9015039759691279015.sh
+ /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 34 60.2M   34 20.6M    0     0  43.6M      0  0:00:01 --:--:--  0:00:01 43.6M
100 60.2M  100 60.2M    0     0  67.3M      0 --:--:-- --:--:-- --:--:-- 93.8M
Setting project configuration for: drools-pdp
Configuring docker compose...
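Because 'docker compose' is missing, the script installs the Compose v2 CLI plugin before continuing; a minimal sketch of that install, with an illustrative download URL (the exact version fetched by the job is not shown in the log):

  # install the compose CLI plugin for the current user
  mkdir -p ~/.docker/cli-plugins
  curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
       -o ~/.docker/cli-plugins/docker-compose
  chmod +x ~/.docker/cli-plugins/docker-compose
  docker compose version   # 'compose' should now resolve as a docker command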
Starting drools-pdp using postgres + Grafana/Prometheus
api Pulling
grafana Pulling
policy-db-migrator Pulling
pap Pulling
prometheus Pulling
zookeeper Pulling
kafka Pulling
drools-pdp Pulling
postgres Pulling
[... interleaved per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting progress output omitted ...]
api Pulled
pap Pulled
policy-db-migrator Pulled
[... layer download and extraction continues for the remaining images ...]
e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB eabd8714fec9 Extracting [===> ] 23.4MB/375MB 7ab7b54fb86b Downloading [==============================> ] 65.96MB/107.2MB 57bb65838ce6 Downloading [=====================> ] 45.96MB/108.2MB 2d1ceb071048 Extracting [==========> ] 32.31MB/152.1MB 55f2b468da67 Extracting [==============================> ] 159.3MB/257.9MB eabd8714fec9 Extracting [===> ] 25.07MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 57bb65838ce6 Downloading [==========================> ] 57.85MB/108.2MB eabd8714fec9 Extracting [====> ] 30.64MB/375MB 7ab7b54fb86b Downloading [=====================================> ] 79.48MB/107.2MB 55f2b468da67 Extracting [===============================> ] 162.7MB/257.9MB 57bb65838ce6 Downloading [================================> ] 71.37MB/108.2MB 7ab7b54fb86b Downloading [============================================> ] 95.7MB/107.2MB eabd8714fec9 Extracting [====> ] 31.2MB/375MB 2d1ceb071048 Extracting [============> ] 38.44MB/152.1MB 55f2b468da67 Extracting [================================> ] 165.4MB/257.9MB 04f6155c873d Pull complete e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 7ab7b54fb86b Verifying Checksum 7ab7b54fb86b Download complete 57bb65838ce6 Downloading [======================================> ] 84.34MB/108.2MB eabd8714fec9 Extracting [=====> ] 42.34MB/375MB 2d1ceb071048 Extracting [===============> ] 47.35MB/152.1MB 55f2b468da67 Extracting [================================> ] 168.8MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 57bb65838ce6 Downloading [============================================> ] 96.78MB/108.2MB 55f2b468da67 Extracting [================================> ] 169.9MB/257.9MB 2d1ceb071048 Extracting [================> ] 51.25MB/152.1MB eabd8714fec9 Extracting [======> ] 47.91MB/375MB 57bb65838ce6 Downloading [===============================================> ] 103.8MB/108.2MB 2d1ceb071048 Extracting [===================> ] 59.05MB/152.1MB 57bb65838ce6 Verifying Checksum 57bb65838ce6 Download complete e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB eabd8714fec9 Extracting [=======> ] 55.71MB/375MB 2d1ceb071048 Extracting [======================> ] 67.96MB/152.1MB eabd8714fec9 Extracting [========> ] 61.28MB/375MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 2d1ceb071048 Extracting [========================> ] 75.2MB/152.1MB f3b09c502777 Pull complete eabd8714fec9 Extracting [=========> ] 67.96MB/375MB e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 2d1ceb071048 Extracting [==========================> ] 79.1MB/152.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [=========> ] 74.09MB/375MB 2d1ceb071048 Extracting [===========================> ] 84.67MB/152.1MB eabd8714fec9 Extracting [==========> ] 77.99MB/375MB 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB 408012a7b118 Extracting 
[==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B eabd8714fec9 Extracting [==========> ] 79.1MB/375MB 2d1ceb071048 Extracting [============================> ] 86.34MB/152.1MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB eabd8714fec9 Extracting [==========> ] 82.44MB/375MB 2d1ceb071048 Extracting [=============================> ] 88.57MB/152.1MB 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB e73cb4a42719 Pull complete eabd8714fec9 Extracting [===========> ] 88.01MB/375MB 2d1ceb071048 Extracting [==============================> ] 91.36MB/152.1MB 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB eabd8714fec9 Extracting [============> ] 94.7MB/375MB 2d1ceb071048 Extracting [===============================> ] 95.26MB/152.1MB eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 2d1ceb071048 Extracting [================================> ] 100.3MB/152.1MB 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [=============> ] 100.3MB/375MB 2d1ceb071048 Extracting [=================================> ] 103.1MB/152.1MB 55f2b468da67 Extracting [==================================> ] 176.6MB/257.9MB 85dde7dceb0a Extracting [=> ] 2.228MB/63.48MB eabd8714fec9 Extracting [==============> ] 106.4MB/375MB 2d1ceb071048 Extracting [==================================> ] 105.8MB/152.1MB 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB eabd8714fec9 Extracting [==============> ] 108.6MB/375MB 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB 2d1ceb071048 Extracting [===================================> ] 108.6MB/152.1MB 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB eabd8714fec9 Extracting [==============> ] 112MB/375MB 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB 2d1ceb071048 Extracting [=====================================> ] 112.5MB/152.1MB 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB eabd8714fec9 Extracting [===============> ] 117MB/375MB 55f2b468da67 Extracting [===================================> ] 185.5MB/257.9MB 2d1ceb071048 Extracting [======================================> ] 116.4MB/152.1MB eabd8714fec9 Extracting [================> ] 121.4MB/375MB 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB 55f2b468da67 Extracting [=====================================> ] 191.1MB/257.9MB 2d1ceb071048 Extracting [=======================================> ] 119.2MB/152.1MB eabd8714fec9 Extracting [================> ] 124.8MB/375MB 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB 2d1ceb071048 Extracting [========================================> ] 123.1MB/152.1MB 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 2d1ceb071048 Extracting [=========================================> ] 126.5MB/152.1MB 408012a7b118 Pull complete eabd8714fec9 Extracting [=================> ] 128.7MB/375MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB 2d1ceb071048 Extracting [=========================================> ] 
127.6MB/152.1MB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB eabd8714fec9 Extracting [=================> ] 130.9MB/375MB 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB 85dde7dceb0a Extracting [=========> ] 11.7MB/63.48MB 2d1ceb071048 Extracting [===========================================> ] 131.5MB/152.1MB a83b68436f09 Pull complete eabd8714fec9 Extracting [=================> ] 133.7MB/375MB 2d1ceb071048 Extracting [===========================================> ] 132.6MB/152.1MB 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 85dde7dceb0a Extracting [==========> ] 12.81MB/63.48MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eabd8714fec9 Extracting [=================> ] 134.8MB/375MB 2d1ceb071048 Extracting [===========================================> ] 133.7MB/152.1MB eabd8714fec9 Extracting [==================> ] 137MB/375MB 2d1ceb071048 Extracting [============================================> ] 135.9MB/152.1MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 85dde7dceb0a Extracting [===========> ] 15.04MB/63.48MB 2d1ceb071048 Extracting [==============================================> ] 140.9MB/152.1MB eabd8714fec9 Extracting [==================> ] 138.7MB/375MB 85dde7dceb0a Extracting [============> ] 16.15MB/63.48MB 44986281b8b9 Pull complete eabd8714fec9 Extracting [==================> ] 139.3MB/375MB 55f2b468da67 Extracting [=======================================> ] 201.7MB/257.9MB 2d1ceb071048 Extracting [===============================================> ] 144.8MB/152.1MB 2d1ceb071048 Extracting [===============================================> ] 145.9MB/152.1MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB eabd8714fec9 Extracting [==================> ] 140.4MB/375MB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 787d6bee9571 Pull complete eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 2d1ceb071048 Extracting [================================================> ] 147.1MB/152.1MB 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB eabd8714fec9 Extracting [===================> ] 143.2MB/375MB 2d1ceb071048 Extracting [=================================================> ] 149.3MB/152.1MB 85dde7dceb0a Extracting [==============> ] 18.38MB/63.48MB eabd8714fec9 Extracting [===================> ] 144.3MB/375MB 2d1ceb071048 Extracting [=================================================> ] 151MB/152.1MB 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB 85dde7dceb0a Extracting [==============> ] 18.94MB/63.48MB 2d1ceb071048 Extracting [==================================================>] 152.1MB/152.1MB 2d1ceb071048 Extracting [==================================================>] 152.1MB/152.1MB eabd8714fec9 Extracting [===================> ] 146.5MB/375MB 85dde7dceb0a Extracting [================> ] 20.61MB/63.48MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB eabd8714fec9 
Extracting [====================> ] 151.5MB/375MB 13ff0988aaea Extracting [==================================================>] 167B/167B 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 85dde7dceb0a Extracting [==================> ] 23.4MB/63.48MB 85dde7dceb0a Extracting [====================> ] 25.62MB/63.48MB 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 85dde7dceb0a Extracting [======================> ] 28.97MB/63.48MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB eabd8714fec9 Extracting [=====================> ] 158.8MB/375MB 55f2b468da67 Extracting [=========================================> ] 216.1MB/257.9MB eabd8714fec9 Extracting [=====================> ] 161.5MB/375MB 85dde7dceb0a Extracting [========================> ] 31.2MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB eabd8714fec9 Extracting [======================> ] 166MB/375MB 85dde7dceb0a Extracting [==========================> ] 33.42MB/63.48MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB 85dde7dceb0a Extracting [==========================> ] 33.98MB/63.48MB eabd8714fec9 Extracting [======================> ] 168.8MB/375MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB bf70c5107ab5 Pull complete 2d1ceb071048 Pull complete eabd8714fec9 Extracting [=======================> ] 175.5MB/375MB 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB 85dde7dceb0a Extracting [============================> ] 36.21MB/63.48MB eabd8714fec9 Extracting [=========================> ] 187.7MB/375MB 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB 85dde7dceb0a Extracting [==============================> ] 38.44MB/63.48MB eabd8714fec9 Extracting [==========================> ] 197.2MB/375MB 85dde7dceb0a Extracting [================================> ] 41.78MB/63.48MB 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB eabd8714fec9 Extracting [===========================> ] 207.8MB/375MB 85dde7dceb0a Extracting [===================================> ] 44.56MB/63.48MB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 85dde7dceb0a Extracting [=====================================> ] 47.35MB/63.48MB 13ff0988aaea Pull complete 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 85dde7dceb0a Extracting [=======================================> ] 49.58MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 233.4MB/257.9MB eabd8714fec9 Extracting [=============================> ] 222.8MB/375MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB eabd8714fec9 Extracting [=============================> ] 224.5MB/375MB 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 85dde7dceb0a Extracting [========================================> ] 51.81MB/63.48MB b967ca84731b Extracting [> ] 229.4kB/22.8MB eabd8714fec9 
Extracting [==============================> ] 227.8MB/375MB 55f2b468da67 Extracting [==============================================> ] 237.3MB/257.9MB b967ca84731b Extracting [=> ] 688.1kB/22.8MB 85dde7dceb0a Extracting [==========================================> ] 54.59MB/63.48MB b967ca84731b Extracting [===============> ] 6.881MB/22.8MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB eabd8714fec9 Extracting [===============================> ] 232.8MB/375MB b967ca84731b Extracting [==================> ] 8.258MB/22.8MB 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB eabd8714fec9 Extracting [===============================> ] 233.4MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB b967ca84731b Extracting [==========================> ] 11.93MB/22.8MB eabd8714fec9 Extracting [================================> ] 241.8MB/375MB b967ca84731b Extracting [============================> ] 12.85MB/22.8MB 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB b967ca84731b Extracting [=============================> ] 13.53MB/22.8MB 85dde7dceb0a Extracting [================================================> ] 61.83MB/63.48MB eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB b967ca84731b Extracting [===================================> ] 16.29MB/22.8MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [=================================> ] 250.7MB/375MB 55f2b468da67 Extracting [================================================> ] 251.2MB/257.9MB b967ca84731b Extracting [====================================> ] 16.74MB/22.8MB eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB b967ca84731b Extracting [=====================================> ] 16.97MB/22.8MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB b967ca84731b Extracting [==========================================> ] 19.5MB/22.8MB 55f2b468da67 Extracting [=================================================> ] 256.2MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB b967ca84731b Extracting [=============================================> ] 20.64MB/22.8MB b967ca84731b Extracting [==================================================>] 22.8MB/22.8MB eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 1ccde423731d Pull complete eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB eabd8714fec9 
Extracting [====================================> ] 270.7MB/375MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB eabd8714fec9 Extracting [====================================> ] 276.3MB/375MB eabd8714fec9 Extracting [=====================================> ] 282.4MB/375MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B eabd8714fec9 Extracting [======================================> ] 288.6MB/375MB eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 4b82842ab819 Pull complete eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB eabd8714fec9 Extracting [=========================================> ] 309.2MB/375MB eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 85dde7dceb0a Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB eabd8714fec9 Extracting [==========================================> ] 320.9MB/375MB eabd8714fec9 Extracting [===========================================> ] 324.2MB/375MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB b967ca84731b Pull complete eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB 55f2b468da67 Pull complete 7221d93db8a9 Pull complete 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 344.3MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 7e568a0dc8fb Pull complete 
eabd8714fec9 Extracting [===============================================> ] 357.6MB/375MB eabd8714fec9 Extracting [================================================> ] 367.1MB/375MB eabd8714fec9 Extracting [=================================================> ] 372.7MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 7009d5001b77 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB 7ce54b4fc536 Extracting [==================================================>] 372B/372B 7ce54b4fc536 Extracting [==================================================>] 372B/372B eabd8714fec9 Pull complete 7df673c7455d Pull complete 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB postgres Pulled prometheus Pulled 82bfc142787e Extracting [======================> ] 3.834MB/8.613MB 7ce54b4fc536 Pull complete 538deb30e80c Pull complete 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB grafana Pulled 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 7ab7b54fb86b Extracting [> ] 557.1kB/107.2MB 82bfc142787e Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 8f10199ed94b Extracting [=================> ] 3.146MB/8.768MB 7ab7b54fb86b Extracting [========> ] 17.83MB/107.2MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 46baca71a4ef Pull complete 7ab7b54fb86b Extracting [===============> ] 32.31MB/107.2MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB f963a77d2726 Pull complete 7ab7b54fb86b Extracting [=====================> ] 45.12MB/107.2MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB b0e0ef7895f4 Extracting [===============> ] 11.8MB/37.01MB 7ab7b54fb86b Extracting [============================> ] 60.16MB/107.2MB b0e0ef7895f4 Extracting [=============================> ] 21.63MB/37.01MB f3a82e9f1761 Extracting [============> ] 11.01MB/44.41MB 7ab7b54fb86b Extracting [==================================> ] 74.65MB/107.2MB b0e0ef7895f4 Extracting [=========================================> ] 30.67MB/37.01MB f3a82e9f1761 Extracting [=====================> ] 19.27MB/44.41MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 7ab7b54fb86b Extracting [==========================================> ] 90.24MB/107.2MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB f3a82e9f1761 Extracting [================================> ] 28.9MB/44.41MB 7ab7b54fb86b Extracting [================================================> ] 104.7MB/107.2MB 7ab7b54fb86b Extracting [==================================================>] 107.2MB/107.2MB 7ab7b54fb86b Pull complete c0c90eeb8aca Pull complete 5cfb27c10ea5 
Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B f3a82e9f1761 Extracting [===============================================> ] 42.21MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 57bb65838ce6 Extracting [> ] 557.1kB/108.2MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 57bb65838ce6 Extracting [=====> ] 12.81MB/108.2MB 79161a3f5362 Pull complete 40a5eed61bb0 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 57bb65838ce6 Extracting [==========> ] 22.84MB/108.2MB e040ea11fa10 Pull complete 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 57bb65838ce6 Extracting [================> ] 36.77MB/108.2MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 57bb65838ce6 Extracting [=======================> ] 50.69MB/108.2MB 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 09d5a3f70313 Extracting [=====> ] 12.81MB/109.2MB 57bb65838ce6 Extracting [=============================> ] 63.5MB/108.2MB 09d5a3f70313 Extracting [===========> ] 26.18MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 57bb65838ce6 Extracting [===================================> ] 75.76MB/108.2MB 09d5a3f70313 Extracting [=================> ] 38.44MB/109.2MB 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 57bb65838ce6 Extracting [=========================================> ] 90.8MB/108.2MB 09d5a3f70313 Extracting [=========================> ] 55.15MB/109.2MB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 57bb65838ce6 Extracting [=================================================> ] 106.4MB/108.2MB 57bb65838ce6 Extracting [==================================================>] 108.2MB/108.2MB 09d5a3f70313 Extracting [===============================> ] 69.07MB/109.2MB 71a9f6a9ab4d Pull complete 57bb65838ce6 Pull complete drools-pdp Pulled da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09d5a3f70313 Extracting [=====================================> ] 81.89MB/109.2MB da3ed5db7103 Extracting [====> ] 12.26MB/127.4MB 09d5a3f70313 Extracting [=============================================> ] 99.71MB/109.2MB 09d5a3f70313 Extracting [================================================> ] 105.8MB/109.2MB da3ed5db7103 Extracting [==========> ] 26.74MB/127.4MB 09d5a3f70313 
Extracting [==================================================>] 109.2MB/109.2MB da3ed5db7103 Extracting [=================> ] 43.45MB/127.4MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB da3ed5db7103 Extracting [========================> ] 63.5MB/127.4MB 356f5c2c843b Pull complete kafka Pulled da3ed5db7103 Extracting [===============================> ] 80.22MB/127.4MB da3ed5db7103 Extracting [======================================> ] 98.6MB/127.4MB da3ed5db7103 Extracting [=============================================> ] 115.9MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 122MB/127.4MB da3ed5db7103 Extracting [=================================================> ] 127MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container prometheus Creating Container zookeeper Creating Container postgres Creating Container prometheus Created Container postgres Created Container grafana Creating Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container grafana Created Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-drools-pdp Creating Container policy-drools-pdp Created Container zookeeper Starting Container postgres Starting Container prometheus Starting Container zookeeper Started Container kafka Starting Container kafka Started Container postgres Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container policy-api Started Container policy-pap Starting Container prometheus Started Container grafana Starting Container policy-pap Started Container policy-drools-pdp Starting Container policy-drools-pdp Started Container grafana Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting 1 minute for drools-pdp to start... Checking if REST port 30216 is open on localhost ... IMAGE NAMES STATUS nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT policy-drools-pdp Up About a minute nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up About a minute nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up About a minute nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up About a minute nexus3.onap.org:10001/grafana/grafana:latest grafana Up About a minute nexus3.onap.org:10001/library/postgres:16.4 postgres Up About a minute nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up About a minute nexus3.onap.org:10001/prom/prometheus:latest prometheus Up About a minute Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/resources/tests/models'... 
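The "Checking if REST port 30216 is open on localhost" step above only needs to confirm that the drools-pdp REST endpoint is accepting connections. A minimal sketch of such a probe, assuming a plain TCP connect is enough (the actual CSIT wait scripts may use a different mechanism):

# hypothetical stand-alone probe; 30216 is the host port mapped to drools-pdp's REST API in this run
PORT=30216
if timeout 2 bash -c "exec 3<>/dev/tcp/localhost/${PORT}" 2>/dev/null; then
  echo "REST port ${PORT} is open on localhost"
else
  echo "REST port ${PORT} is not open yet"
fi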
Building robot framework docker image sha256:761401b56ec1300d86c45791f247e89ccd22d61d7314f8f10cb35b3c8783ee8b
top - 07:47:56 up 4 min, 0 users, load average: 2.31, 1.61, 0.69
Tasks: 230 total, 1 running, 152 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.1 us, 3.8 sy, 0.0 ni, 77.2 id, 3.7 wa, 0.0 hi, 0.1 si, 0.1 st
       total   used   free   shared   buff/cache   available
Mem:     31G   2.6G    21G      27M         7.6G         28G
Swap:   1.0G     0B   1.0G
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
9bbcdf60db5b   policy-drools-pdp   0.82%   279.9MiB / 31.41GiB   0.87%   33.3kB / 42.6kB   0B / 8.19kB      54
a74e41961eaa   policy-pap          0.66%   540.2MiB / 31.41GiB   1.68%   84.2kB / 127kB    0B / 139MB       68
10f8986c0ffd   policy-api          0.12%   431.2MiB / 31.41GiB   1.34%   1.15MB / 1.02MB   0B / 0B          57
6b6c32ce41ab   kafka               1.80%   392.2MiB / 31.41GiB   1.22%   158kB / 141kB     8.19kB / 573kB   83
6e8134223855   grafana             0.18%   102.7MiB / 31.41GiB   0.32%   19MB / 242kB      4.1kB / 31.4MB   22
aa35a49306c5   postgres            0.00%   85.43MiB / 31.41GiB   0.27%   1.64MB / 1.71MB   0B / 157MB       26
035229ef3be7   zookeeper           0.21%   83.18MiB / 31.41GiB   0.26%   52.7kB / 44.4kB   229kB / 381kB    62
7d071d9a02de   prometheus          0.00%   20.23MiB / 31.41GiB   0.06%   88.8kB / 3.37kB   0B / 0B          13
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
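The ROBOT_VARIABLES printed above are ordinary Robot Framework -v overrides. A minimal sketch of an equivalent manual invocation, assuming the standard robot CLI and only a subset of the variables that look relevant to this suite (the real CSIT wrapper may pass more options):

# hypothetical manual run inside the robot workspace; suite name and variable
# values are taken from the log above, the flag selection is an assumption
robot \
  -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
  -v POLICY_DROOLS_IP:policy-drools-pdp:9696 \
  -v PROMETHEUS_IP:prometheus:9090 \
  -v TEST_ENV:docker \
  --outputdir /tmp/results \
  drools-pdp-test.robot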
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-20T07:46:13.024623052Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-20T07:46:13Z
grafana | logger=settings t=2025-06-20T07:46:13.025090484Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-20T07:46:13.025107814Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-20T07:46:13.025113784Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-20T07:46:13.025118495Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-20T07:46:13.025123055Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-20T07:46:13.025127455Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-20T07:46:13.025132625Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-20T07:46:13.025137485Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-20T07:46:13.025141605Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-20T07:46:13.025175376Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-20T07:46:13.025183456Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-20T07:46:13.025189256Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-20T07:46:13.025201347Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-20T07:46:13.025206417Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-20T07:46:13.025211127Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-20T07:46:13.025217017Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-20T07:46:13.025221797Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-20T07:46:13.025250608Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-20T07:46:13.02572151Z level=info msg=FeatureToggles nestedFolders=true alertingRulePermanentlyDelete=true alertingRuleVersionHistoryRestore=true lokiLabelNamesQueryApi=true recordedQueriesMulti=true dashboardSceneSolo=true pluginsDetailsRightPanel=true publicDashboardsScene=true alertingQueryAndExpressionsStepMode=true recoveryThreshold=true cloudWatchCrossAccountQuerying=true logRowsPopoverMenu=true onPremToCloudMigrations=true useSessionStorageForRedirection=true logsExploreTableVisualisation=true alertingRuleRecoverDeleted=true kubernetesClientDashboardsFolders=true transformationsRedesign=true correlations=true reportingUseRawTimeRange=true alertingApiServer=true alertRuleRestore=true promQLScope=true logsPanelControls=true groupToNestedTableTransformation=true dataplaneFrontendFallback=true formatString=true lokiStructuredMetadata=true prometheusAzureOverrideAudience=true alertingUIOptimizeReducer=true prometheusUsesCombobox=true lokiQueryHints=true newFiltersUI=true lokiQuerySplitting=true ssoSettingsSAML=true newPDFRendering=true tlsMemcached=true alertingInsights=true unifiedRequestLog=true dashgpt=true failWrongDSUID=true influxdbBackendMigration=true cloudWatchNewLabelParsing=true addFieldFromCalculationStatFunctions=true dashboardSceneForViewers=true angularDeprecationUI=true logsInfiniteScrolling=true annotationPermissionUpdate=true alertingSimplifiedRouting=true panelMonitoring=true externalCorePlugins=true awsAsyncQueryCaching=true grafanaconThemes=true newDashboardSharingComponent=true preinstallAutoUpdate=true ssoSettingsApi=true logsContextDatasourceUi=true pinNavItems=true kubernetesPlaylists=true cloudWatchRoundUpEndTime=true dashboardScene=true unifiedStorageSearchPermissionFiltering=true alertingNotificationsStepMode=true azureMonitorEnableUserAuth=true azureMonitorPrometheusExemplars=true
grafana | logger=sqlstore t=2025-06-20T07:46:13.025834483Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-20T07:46:13.025866984Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-20T07:46:13.027758925Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-20T07:46:13.027775575Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-20T07:46:13.028676719Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-20T07:46:13.029799869Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.12242ms
grafana | logger=migrator t=2025-06-20T07:46:13.037809711Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-20T07:46:13.038440059Z level=info msg="Migration
successfully executed" id="create user table" duration=630.038µs grafana | logger=migrator t=2025-06-20T07:46:13.046496523Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-20T07:46:13.048939267Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=2.443474ms grafana | logger=migrator t=2025-06-20T07:46:13.055570454Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-20T07:46:13.05654704Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=976.416µs grafana | logger=migrator t=2025-06-20T07:46:13.066073154Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-20T07:46:13.06671237Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=638.726µs grafana | logger=migrator t=2025-06-20T07:46:13.070425079Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-20T07:46:13.070958173Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=533.144µs grafana | logger=migrator t=2025-06-20T07:46:13.07723043Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-20T07:46:13.081913384Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.680554ms grafana | logger=migrator t=2025-06-20T07:46:13.087668387Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-20T07:46:13.091755776Z level=info msg="Migration successfully executed" id="create user table v2" duration=4.086559ms grafana | logger=migrator t=2025-06-20T07:46:13.099524062Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-20T07:46:13.100495348Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=970.936µs grafana | logger=migrator t=2025-06-20T07:46:13.106544849Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-20T07:46:13.107257978Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=712.309µs grafana | logger=migrator t=2025-06-20T07:46:13.110593706Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:13.111207143Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=613.077µs grafana | logger=migrator t=2025-06-20T07:46:13.116104073Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-20T07:46:13.116993437Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=888.904µs grafana | logger=migrator t=2025-06-20T07:46:13.120745986Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-20T07:46:13.121885256Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.13852ms grafana | logger=migrator t=2025-06-20T07:46:13.136163857Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-20T07:46:13.136204988Z level=info msg="Migration successfully executed" id="Update user table charset" 
duration=42.331µs grafana | logger=migrator t=2025-06-20T07:46:13.142565607Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-20T07:46:13.145899645Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=3.334918ms grafana | logger=migrator t=2025-06-20T07:46:13.156463506Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-20T07:46:13.156808695Z level=info msg="Migration successfully executed" id="Add missing user data" duration=345.539µs grafana | logger=migrator t=2025-06-20T07:46:13.163984845Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-20T07:46:13.164933801Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=948.856µs grafana | logger=migrator t=2025-06-20T07:46:13.16940816Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-20T07:46:13.169994596Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=585.966µs grafana | logger=migrator t=2025-06-20T07:46:13.174164926Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-20T07:46:13.176148089Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.981953ms grafana | logger=migrator t=2025-06-20T07:46:13.181308457Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2025-06-20T07:46:13.190920312Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.610945ms grafana | logger=migrator t=2025-06-20T07:46:13.199715735Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-20T07:46:13.201127724Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.429528ms grafana | logger=migrator t=2025-06-20T07:46:13.206872716Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-20T07:46:13.207319268Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=452.272µs grafana | logger=migrator t=2025-06-20T07:46:13.211661814Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-20T07:46:13.212452334Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=790.48µs grafana | logger=migrator t=2025-06-20T07:46:13.219984254Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-20T07:46:13.221203537Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.218853ms grafana | logger=migrator t=2025-06-20T07:46:13.225431949Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-20T07:46:13.226202Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=773.821µs grafana | logger=migrator t=2025-06-20T07:46:13.237650874Z level=info msg="Executing migration" id="update service accounts login field orgid to 
appear only once" grafana | logger=migrator t=2025-06-20T07:46:13.238378223Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=723.149µs grafana | logger=migrator t=2025-06-20T07:46:13.242143143Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-20T07:46:13.243549291Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=1.402798ms grafana | logger=migrator t=2025-06-20T07:46:13.250142326Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-20T07:46:13.250537426Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=394.43µs grafana | logger=migrator t=2025-06-20T07:46:13.254198004Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-20T07:46:13.254874302Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=675.638µs grafana | logger=migrator t=2025-06-20T07:46:13.26043012Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-20T07:46:13.261497538Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.041237ms grafana | logger=migrator t=2025-06-20T07:46:13.268608597Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-20T07:46:13.269361767Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=752.81µs grafana | logger=migrator t=2025-06-20T07:46:13.273315003Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-20T07:46:13.274533985Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.218722ms grafana | logger=migrator t=2025-06-20T07:46:13.282409724Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-20T07:46:13.283112723Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=702.949µs grafana | logger=migrator t=2025-06-20T07:46:13.288421664Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-20T07:46:13.288453515Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=73.502µs grafana | logger=migrator t=2025-06-20T07:46:13.296626802Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-20T07:46:13.297735781Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.109089ms grafana | logger=migrator t=2025-06-20T07:46:13.301854481Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-20T07:46:13.302616721Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=761.64µs grafana | logger=migrator t=2025-06-20T07:46:13.306321449Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-20T07:46:13.307016178Z level=info msg="Migration successfully executed" id="drop 
index IDX_temp_user_code - v1" duration=694.539µs grafana | logger=migrator t=2025-06-20T07:46:13.315935765Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-20T07:46:13.317485466Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.553431ms grafana | logger=migrator t=2025-06-20T07:46:13.322878679Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:13.327162004Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.274355ms grafana | logger=migrator t=2025-06-20T07:46:13.331647242Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-20T07:46:13.332257329Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=609.727µs grafana | logger=migrator t=2025-06-20T07:46:13.339777639Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-20T07:46:13.340312323Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=534.614µs grafana | logger=migrator t=2025-06-20T07:46:13.347033382Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-20T07:46:13.348235383Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.201421ms grafana | logger=migrator t=2025-06-20T07:46:13.352037015Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-20T07:46:13.352948429Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=910.604µs grafana | logger=migrator t=2025-06-20T07:46:13.356685378Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-20T07:46:13.357828519Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.142641ms grafana | logger=migrator t=2025-06-20T07:46:13.363783837Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:13.364435234Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=650.887µs grafana | logger=migrator t=2025-06-20T07:46:13.368281976Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-20T07:46:13.369316554Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.033908ms grafana | logger=migrator t=2025-06-20T07:46:13.375529579Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-20T07:46:13.376161926Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=631.557µs grafana | logger=migrator t=2025-06-20T07:46:13.379918836Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-20T07:46:13.381017405Z level=info msg="Migration successfully executed" id="create star table" duration=1.097859ms grafana | logger=migrator t=2025-06-20T07:46:13.386425219Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 
grafana | logger=migrator t=2025-06-20T07:46:13.387117207Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=691.578µs grafana | logger=migrator t=2025-06-20T07:46:13.390283841Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-20T07:46:13.392043918Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.759857ms grafana | logger=migrator t=2025-06-20T07:46:13.395772497Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-20T07:46:13.397165685Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.392568ms grafana | logger=migrator t=2025-06-20T07:46:13.403211315Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-20T07:46:13.404641953Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.429798ms grafana | logger=migrator t=2025-06-20T07:46:13.408681491Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-20T07:46:13.409925663Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.243242ms grafana | logger=migrator t=2025-06-20T07:46:13.413711135Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-20T07:46:13.414860225Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.14846ms grafana | logger=migrator t=2025-06-20T07:46:13.419315714Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-20T07:46:13.420426663Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.111039ms grafana | logger=migrator t=2025-06-20T07:46:13.435939856Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-20T07:46:13.437611729Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.670814ms grafana | logger=migrator t=2025-06-20T07:46:13.442385227Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-20T07:46:13.443253879Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=868.522µs grafana | logger=migrator t=2025-06-20T07:46:13.447094402Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-20T07:46:13.447981385Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=886.483µs grafana | logger=migrator t=2025-06-20T07:46:13.4523283Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-20T07:46:13.453154743Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=826.413µs grafana | logger=migrator t=2025-06-20T07:46:13.458338741Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-20T07:46:13.458362932Z level=info msg="Migration successfully executed" id="Update org table charset" duration=25.091µs grafana | logger=migrator 
t=2025-06-20T07:46:13.461256118Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-20T07:46:13.46132101Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=63.602µs grafana | logger=migrator t=2025-06-20T07:46:13.465552812Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-20T07:46:13.465814829Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=261.207µs grafana | logger=migrator t=2025-06-20T07:46:13.47072401Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-20T07:46:13.47148122Z level=info msg="Migration successfully executed" id="create dashboard table" duration=756.55µs grafana | logger=migrator t=2025-06-20T07:46:13.477975083Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-20T07:46:13.479202855Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.224192ms grafana | logger=migrator t=2025-06-20T07:46:13.483115329Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-20T07:46:13.484312261Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.196322ms grafana | logger=migrator t=2025-06-20T07:46:13.488339238Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-20T07:46:13.489233512Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=893.994µs grafana | logger=migrator t=2025-06-20T07:46:13.49331553Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-20T07:46:13.494106011Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=790.061µs grafana | logger=migrator t=2025-06-20T07:46:13.500881072Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-20T07:46:13.502197746Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.316664ms grafana | logger=migrator t=2025-06-20T07:46:13.506547802Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-20T07:46:13.514037401Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.489599ms grafana | logger=migrator t=2025-06-20T07:46:13.518344326Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-20T07:46:13.519348002Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.019297ms grafana | logger=migrator t=2025-06-20T07:46:13.52640846Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-20T07:46:13.52793743Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.5313ms grafana | logger=migrator t=2025-06-20T07:46:13.533801717Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-20T07:46:13.534615868Z level=info 
msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=812.051µs grafana | logger=migrator t=2025-06-20T07:46:13.538327637Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:13.53884028Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=512.353µs grafana | logger=migrator t=2025-06-20T07:46:13.542930589Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-20T07:46:13.544212403Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.281524ms grafana | logger=migrator t=2025-06-20T07:46:13.549997007Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-20T07:46:13.550016007Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=19.43µs grafana | logger=migrator t=2025-06-20T07:46:13.556838819Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-20T07:46:13.559756837Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.917278ms grafana | logger=migrator t=2025-06-20T07:46:13.563465205Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-20T07:46:13.565485059Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.020184ms grafana | logger=migrator t=2025-06-20T07:46:13.568720315Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.570497842Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.777117ms grafana | logger=migrator t=2025-06-20T07:46:13.575554076Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.576259395Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=705.009µs grafana | logger=migrator t=2025-06-20T07:46:13.590879424Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.594457189Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.580345ms grafana | logger=migrator t=2025-06-20T07:46:13.599893823Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.601013713Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.12223ms grafana | logger=migrator t=2025-06-20T07:46:13.604695651Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-20T07:46:13.605435111Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=739.14µs grafana | logger=migrator t=2025-06-20T07:46:13.611188533Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-20T07:46:13.611227545Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=41.331µs grafana | logger=migrator t=2025-06-20T07:46:13.620148661Z level=info msg="Executing migration" id="Update dashboard_tag table 
charset" grafana | logger=migrator t=2025-06-20T07:46:13.620188153Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=40.051µs grafana | logger=migrator t=2025-06-20T07:46:13.624052766Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.627385554Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.331698ms grafana | logger=migrator t=2025-06-20T07:46:13.637493903Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.639462955Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.968442ms grafana | logger=migrator t=2025-06-20T07:46:13.645223828Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.648789774Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.566796ms grafana | logger=migrator t=2025-06-20T07:46:13.652275856Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.654219587Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.943371ms grafana | logger=migrator t=2025-06-20T07:46:13.657670499Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.657881305Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=210.726µs grafana | logger=migrator t=2025-06-20T07:46:13.662419065Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:13.663164936Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=745.101µs grafana | logger=migrator t=2025-06-20T07:46:13.671143138Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-20T07:46:13.672484963Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.341065ms grafana | logger=migrator t=2025-06-20T07:46:13.67837807Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-20T07:46:13.678446841Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=72.952µs grafana | logger=migrator t=2025-06-20T07:46:13.684691567Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-20T07:46:13.685692604Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.001607ms grafana | logger=migrator t=2025-06-20T07:46:13.689277829Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-20T07:46:13.690038529Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=760.89µs grafana | logger=migrator t=2025-06-20T07:46:13.69454767Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:13.700751014Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to 
dashboard_provisioning_tmp_qwerty - v1" duration=6.201754ms grafana | logger=migrator t=2025-06-20T07:46:13.708327866Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-20T07:46:13.709315192Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=987.396µs grafana | logger=migrator t=2025-06-20T07:46:13.713123864Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-20T07:46:13.71413915Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.015046ms grafana | logger=migrator t=2025-06-20T07:46:13.717607872Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-20T07:46:13.718635469Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.026287ms grafana | logger=migrator t=2025-06-20T07:46:13.725579014Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:13.725866512Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=287.688µs grafana | logger=migrator t=2025-06-20T07:46:13.731180933Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-20T07:46:13.732155529Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=974.376µs grafana | logger=migrator t=2025-06-20T07:46:13.735976911Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-20T07:46:13.74047489Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.497519ms grafana | logger=migrator t=2025-06-20T07:46:13.74460832Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-20T07:46:13.7453676Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=763.11µs grafana | logger=migrator t=2025-06-20T07:46:13.75098622Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-20T07:46:13.751163124Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=177.034µs grafana | logger=migrator t=2025-06-20T07:46:13.754867253Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-20T07:46:13.755041827Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=174.985µs grafana | logger=migrator t=2025-06-20T07:46:13.758761237Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-20T07:46:13.760286707Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.52471ms grafana | logger=migrator t=2025-06-20T07:46:13.768578437Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.771342491Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.766584ms grafana | logger=migrator t=2025-06-20T07:46:13.775180523Z level=info msg="Executing migration" id="Add 
deleted for dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.777592547Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.411884ms grafana | logger=migrator t=2025-06-20T07:46:13.781372838Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-20T07:46:13.782187519Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=813.611µs grafana | logger=migrator t=2025-06-20T07:46:13.789165545Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-20T07:46:13.791413364Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.247609ms grafana | logger=migrator t=2025-06-20T07:46:13.796801957Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-20T07:46:13.800598919Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=3.796132ms grafana | logger=migrator t=2025-06-20T07:46:13.804487682Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-20T07:46:13.805587492Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=1.0992ms grafana | logger=migrator t=2025-06-20T07:46:13.809327411Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-20T07:46:13.811788456Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.461005ms grafana | logger=migrator t=2025-06-20T07:46:13.818189486Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-20T07:46:13.818992917Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=803.441µs grafana | logger=migrator t=2025-06-20T07:46:13.823955169Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-20T07:46:13.824598936Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=643.907µs grafana | logger=migrator t=2025-06-20T07:46:13.836644777Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-20T07:46:13.838071895Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.426167ms grafana | logger=migrator t=2025-06-20T07:46:13.845740979Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-20T07:46:13.846987961Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.246732ms grafana | logger=migrator t=2025-06-20T07:46:13.852815256Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-20T07:46:13.854150042Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.335266ms grafana | logger=migrator t=2025-06-20T07:46:13.857995064Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-20T07:46:13.858765025Z level=info msg="Migration successfully executed" id="drop index 
IDX_data_source_account_id - v1" duration=772.891µs grafana | logger=migrator t=2025-06-20T07:46:13.864815986Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-20T07:46:13.866004207Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.187021ms grafana | logger=migrator t=2025-06-20T07:46:13.870119907Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-20T07:46:13.879697721Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.577874ms grafana | logger=migrator t=2025-06-20T07:46:13.883486632Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-20T07:46:13.884378046Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=891.184µs grafana | logger=migrator t=2025-06-20T07:46:13.890885699Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-20T07:46:13.891739971Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=865.033µs grafana | logger=migrator t=2025-06-20T07:46:13.895860201Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-20T07:46:13.896788755Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=928.364µs grafana | logger=migrator t=2025-06-20T07:46:13.904375087Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-20T07:46:13.905368733Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=993.676µs grafana | logger=migrator t=2025-06-20T07:46:13.910641354Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-20T07:46:13.913677724Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.03667ms grafana | logger=migrator t=2025-06-20T07:46:13.917551117Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-20T07:46:13.920028903Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.475686ms grafana | logger=migrator t=2025-06-20T07:46:13.925498738Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-20T07:46:13.925524939Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.441µs grafana | logger=migrator t=2025-06-20T07:46:13.928989152Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-20T07:46:13.929366062Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=377.36µs grafana | logger=migrator t=2025-06-20T07:46:13.933239854Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-20T07:46:13.937761885Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.521201ms grafana | logger=migrator t=2025-06-20T07:46:13.942002917Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator 
t=2025-06-20T07:46:13.942211773Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=208.426µs grafana | logger=migrator t=2025-06-20T07:46:13.948192092Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-20T07:46:13.948767478Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=579.147µs grafana | logger=migrator t=2025-06-20T07:46:13.95379033Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-20T07:46:13.958634019Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.842969ms grafana | logger=migrator t=2025-06-20T07:46:13.964031853Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-20T07:46:13.964321301Z level=info msg="Migration successfully executed" id="Update uid value" duration=290.737µs grafana | logger=migrator t=2025-06-20T07:46:13.970907446Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:13.972165199Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.303364ms grafana | logger=migrator t=2025-06-20T07:46:13.976139044Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-20T07:46:13.977110611Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=971.177µs grafana | logger=migrator t=2025-06-20T07:46:13.980889381Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-20T07:46:13.983640674Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.746593ms grafana | logger=migrator t=2025-06-20T07:46:13.987865827Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-20T07:46:13.990427355Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.561028ms grafana | logger=migrator t=2025-06-20T07:46:13.99663632Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-20T07:46:13.996677591Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=41.801µs grafana | logger=migrator t=2025-06-20T07:46:14.000354068Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-20T07:46:14.001369025Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.015047ms grafana | logger=migrator t=2025-06-20T07:46:14.005244839Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-20T07:46:14.006672148Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.427089ms grafana | logger=migrator t=2025-06-20T07:46:14.014256365Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-20T07:46:14.015321763Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.064918ms grafana | logger=migrator t=2025-06-20T07:46:14.019301751Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-20T07:46:14.020555445Z level=info msg="Migration successfully 
executed" id="add index api_key.account_id_name" duration=1.218463ms grafana | logger=migrator t=2025-06-20T07:46:14.024480922Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-20T07:46:14.025484238Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=998.936µs grafana | logger=migrator t=2025-06-20T07:46:14.045536621Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-20T07:46:14.047820453Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=2.283492ms grafana | logger=migrator t=2025-06-20T07:46:14.052092149Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-20T07:46:14.052930071Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=837.522µs grafana | logger=migrator t=2025-06-20T07:46:14.059080678Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-20T07:46:14.072745477Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=13.66429ms grafana | logger=migrator t=2025-06-20T07:46:14.080388405Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-20T07:46:14.081181816Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=792.912µs grafana | logger=migrator t=2025-06-20T07:46:14.084708271Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-20T07:46:14.085834912Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.125841ms grafana | logger=migrator t=2025-06-20T07:46:14.092567634Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-20T07:46:14.093972611Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.404757ms grafana | logger=migrator t=2025-06-20T07:46:14.09873045Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-20T07:46:14.100993182Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.262242ms grafana | logger=migrator t=2025-06-20T07:46:14.112202995Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:14.112569595Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=367.96µs grafana | logger=migrator t=2025-06-20T07:46:14.116778559Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-20T07:46:14.117610251Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=831.452µs grafana | logger=migrator t=2025-06-20T07:46:14.121878136Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-20T07:46:14.121917787Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=40.671µs grafana | logger=migrator t=2025-06-20T07:46:14.126591914Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator 
t=2025-06-20T07:46:14.130728997Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.135493ms grafana | logger=migrator t=2025-06-20T07:46:14.138462765Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-20T07:46:14.141376324Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.912749ms grafana | logger=migrator t=2025-06-20T07:46:14.145664521Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-20T07:46:14.145915357Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=250.447µs grafana | logger=migrator t=2025-06-20T07:46:14.149876994Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-20T07:46:14.154409757Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.530353ms grafana | logger=migrator t=2025-06-20T07:46:14.161076417Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-20T07:46:14.16377821Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.700743ms grafana | logger=migrator t=2025-06-20T07:46:14.168990991Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-20T07:46:14.169771513Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=779.852µs grafana | logger=migrator t=2025-06-20T07:46:14.175494398Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-20T07:46:14.176084934Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=588.996µs grafana | logger=migrator t=2025-06-20T07:46:14.181342776Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-20T07:46:14.1825989Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.255894ms grafana | logger=migrator t=2025-06-20T07:46:14.186920087Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-20T07:46:14.1877842Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=860.743µs grafana | logger=migrator t=2025-06-20T07:46:14.19367496Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-20T07:46:14.194977375Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.301556ms grafana | logger=migrator t=2025-06-20T07:46:14.201033409Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-20T07:46:14.202405196Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.374687ms grafana | logger=migrator t=2025-06-20T07:46:14.207236687Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-20T07:46:14.207255707Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot 
to mediumtext v2" duration=19.75µs grafana | logger=migrator t=2025-06-20T07:46:14.211340787Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-20T07:46:14.211380098Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=40.481µs grafana | logger=migrator t=2025-06-20T07:46:14.216040425Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-20T07:46:14.220424774Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.384159ms grafana | logger=migrator t=2025-06-20T07:46:14.225597353Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-20T07:46:14.22842866Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.828316ms grafana | logger=migrator t=2025-06-20T07:46:14.244756041Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-20T07:46:14.244784742Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=30.121µs grafana | logger=migrator t=2025-06-20T07:46:14.249732176Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-20T07:46:14.251016872Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.283456ms grafana | logger=migrator t=2025-06-20T07:46:14.256791508Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-20T07:46:14.258076502Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.284414ms grafana | logger=migrator t=2025-06-20T07:46:14.262599305Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-20T07:46:14.262639486Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=41.301µs grafana | logger=migrator t=2025-06-20T07:46:14.266845609Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-20T07:46:14.267689412Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=843.073µs grafana | logger=migrator t=2025-06-20T07:46:14.272740309Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-20T07:46:14.274043055Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.300975ms grafana | logger=migrator t=2025-06-20T07:46:14.279760929Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-20T07:46:14.285834063Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.073924ms grafana | logger=migrator t=2025-06-20T07:46:14.289959495Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-20T07:46:14.289988215Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.571µs grafana | logger=migrator t=2025-06-20T07:46:14.295047042Z 
level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-20T07:46:14.295560626Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=511.794µs grafana | logger=migrator t=2025-06-20T07:46:14.300416088Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-20T07:46:14.312470454Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=12.054076ms grafana | logger=migrator t=2025-06-20T07:46:14.317074739Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-20T07:46:14.317683985Z level=info msg="Migration successfully executed" id="create session table" duration=606.916µs grafana | logger=migrator t=2025-06-20T07:46:14.322710161Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-20T07:46:14.322899046Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=188.345µs grafana | logger=migrator t=2025-06-20T07:46:14.326788452Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-20T07:46:14.326866714Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=78.502µs grafana | logger=migrator t=2025-06-20T07:46:14.331452547Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-20T07:46:14.332570338Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.117121ms grafana | logger=migrator t=2025-06-20T07:46:14.33819394Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-20T07:46:14.339940947Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.746227ms grafana | logger=migrator t=2025-06-20T07:46:14.34596574Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-20T07:46:14.346011551Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=47.051µs grafana | logger=migrator t=2025-06-20T07:46:14.35038445Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-20T07:46:14.350411291Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.261µs grafana | logger=migrator t=2025-06-20T07:46:14.354833491Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-20T07:46:14.362791725Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=7.956914ms grafana | logger=migrator t=2025-06-20T07:46:14.367201635Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-20T07:46:14.370595697Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.393822ms grafana | logger=migrator t=2025-06-20T07:46:14.376306801Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-20T07:46:14.376421294Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=87.582µs grafana | logger=migrator t=2025-06-20T07:46:14.389255192Z level=info msg="Executing migration" 
id="drop preferences table v3" grafana | logger=migrator t=2025-06-20T07:46:14.3895433Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=287.338µs grafana | logger=migrator t=2025-06-20T07:46:14.395342496Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-20T07:46:14.396801646Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.4578ms grafana | logger=migrator t=2025-06-20T07:46:14.402505601Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-20T07:46:14.402530841Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.14µs grafana | logger=migrator t=2025-06-20T07:46:14.407981799Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-20T07:46:14.411501394Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.518475ms grafana | logger=migrator t=2025-06-20T07:46:14.417880096Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-20T07:46:14.418042111Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=162.225µs grafana | logger=migrator t=2025-06-20T07:46:14.423082977Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-20T07:46:14.4283382Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.253312ms grafana | logger=migrator t=2025-06-20T07:46:14.434213748Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-20T07:46:14.441452345Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=7.236687ms grafana | logger=migrator t=2025-06-20T07:46:14.45239202Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-20T07:46:14.452438871Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=43.971µs grafana | logger=migrator t=2025-06-20T07:46:14.456350188Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-20T07:46:14.457542329Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.190722ms grafana | logger=migrator t=2025-06-20T07:46:14.463034508Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-20T07:46:14.464508218Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.47467ms grafana | logger=migrator t=2025-06-20T07:46:14.471207579Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-20T07:46:14.472214556Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.006827ms grafana | logger=migrator t=2025-06-20T07:46:14.475620509Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-20T07:46:14.47640791Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=786.891µs grafana | logger=migrator t=2025-06-20T07:46:14.480206353Z level=info 
msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-20T07:46:14.481037476Z level=info msg="Migration successfully executed" id="add index alert state" duration=830.634µs grafana | logger=migrator t=2025-06-20T07:46:14.486155643Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-20T07:46:14.487536501Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.375148ms grafana | logger=migrator t=2025-06-20T07:46:14.491242231Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-20T07:46:14.492353882Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.109331ms grafana | logger=migrator t=2025-06-20T07:46:14.497076739Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-20T07:46:14.497903742Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=826.833µs grafana | logger=migrator t=2025-06-20T07:46:14.502078605Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-20T07:46:14.503793271Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.718907ms grafana | logger=migrator t=2025-06-20T07:46:14.509415073Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-20T07:46:14.519802454Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.386381ms grafana | logger=migrator t=2025-06-20T07:46:14.52704735Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-20T07:46:14.527806791Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=758.48µs grafana | logger=migrator t=2025-06-20T07:46:14.533149235Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-20T07:46:14.534898613Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.748918ms grafana | logger=migrator t=2025-06-20T07:46:14.539272121Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:14.539830617Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=557.606µs grafana | logger=migrator t=2025-06-20T07:46:14.543952648Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-20T07:46:14.544559944Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=606.296µs grafana | logger=migrator t=2025-06-20T07:46:14.548293746Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-20T07:46:14.549154909Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=859.312µs grafana | logger=migrator t=2025-06-20T07:46:14.555561222Z level=info msg="Executing migration" id="Add column is_default" grafana | 
logger=migrator t=2025-06-20T07:46:14.56507239Z level=info msg="Migration successfully executed" id="Add column is_default" duration=9.510448ms grafana | logger=migrator t=2025-06-20T07:46:14.56954481Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-20T07:46:14.573483687Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.938797ms grafana | logger=migrator t=2025-06-20T07:46:14.577379832Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-20T07:46:14.581481833Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.096471ms grafana | logger=migrator t=2025-06-20T07:46:14.588579885Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-20T07:46:14.592728308Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.147943ms grafana | logger=migrator t=2025-06-20T07:46:14.596482569Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-20T07:46:14.597414664Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=931.265µs grafana | logger=migrator t=2025-06-20T07:46:14.601589928Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-20T07:46:14.601628429Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=38.981µs grafana | logger=migrator t=2025-06-20T07:46:14.607336073Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-20T07:46:14.607521988Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=184.515µs grafana | logger=migrator t=2025-06-20T07:46:14.611385922Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-20T07:46:14.61277074Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.384118ms grafana | logger=migrator t=2025-06-20T07:46:14.61682387Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-20T07:46:14.618590428Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.765459ms grafana | logger=migrator t=2025-06-20T07:46:14.624639451Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-20T07:46:14.625684049Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.043748ms grafana | logger=migrator t=2025-06-20T07:46:14.629617446Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-20T07:46:14.630702125Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.084029ms grafana | logger=migrator t=2025-06-20T07:46:14.63461135Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-20T07:46:14.63568071Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" 
duration=1.06936ms grafana | logger=migrator t=2025-06-20T07:46:14.66267284Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-20T07:46:14.66968061Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=7.00692ms grafana | logger=migrator t=2025-06-20T07:46:14.676219777Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-20T07:46:14.679108955Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.887579ms grafana | logger=migrator t=2025-06-20T07:46:14.683193986Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-20T07:46:14.683764421Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=569.515µs grafana | logger=migrator t=2025-06-20T07:46:14.690609536Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:14.691721986Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.11222ms grafana | logger=migrator t=2025-06-20T07:46:14.696605969Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-20T07:46:14.697590835Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=984.876µs grafana | logger=migrator t=2025-06-20T07:46:14.702227121Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-20T07:46:14.706105556Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.877714ms grafana | logger=migrator t=2025-06-20T07:46:14.717507234Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-20T07:46:14.717538575Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=32.811µs grafana | logger=migrator t=2025-06-20T07:46:14.725446319Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-20T07:46:14.726363223Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=916.774µs grafana | logger=migrator t=2025-06-20T07:46:14.731232745Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-20T07:46:14.73214698Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=912.645µs grafana | logger=migrator t=2025-06-20T07:46:14.743119217Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-20T07:46:14.743374094Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=254.297µs grafana | logger=migrator t=2025-06-20T07:46:14.750888567Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-20T07:46:14.752426749Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.537282ms grafana | logger=migrator t=2025-06-20T07:46:14.757896176Z level=info msg="Executing migration" id="add index annotation 
0 v3" grafana | logger=migrator t=2025-06-20T07:46:14.758826922Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=930.506µs grafana | logger=migrator t=2025-06-20T07:46:14.76464957Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-20T07:46:14.765537693Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=889.103µs grafana | logger=migrator t=2025-06-20T07:46:14.770741584Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-20T07:46:14.77168245Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=940.486µs grafana | logger=migrator t=2025-06-20T07:46:14.775348969Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-20T07:46:14.776821689Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.47247ms grafana | logger=migrator t=2025-06-20T07:46:14.783213822Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-20T07:46:14.78461405Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.399588ms grafana | logger=migrator t=2025-06-20T07:46:14.78870548Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-20T07:46:14.788741031Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=36.391µs grafana | logger=migrator t=2025-06-20T07:46:14.792725979Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.796957543Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.231134ms grafana | logger=migrator t=2025-06-20T07:46:14.80235662Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-20T07:46:14.803189283Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=832.193µs grafana | logger=migrator t=2025-06-20T07:46:14.806878462Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.811094606Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.215074ms grafana | logger=migrator t=2025-06-20T07:46:14.814767126Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-20T07:46:14.815418103Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=647.567µs grafana | logger=migrator t=2025-06-20T07:46:14.822569107Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-20T07:46:14.823982535Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.412338ms grafana | logger=migrator t=2025-06-20T07:46:14.828109637Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-20T07:46:14.828928989Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=819.222µs grafana | logger=migrator t=2025-06-20T07:46:14.832256239Z level=info 
msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-20T07:46:14.844008577Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.751518ms grafana | logger=migrator t=2025-06-20T07:46:14.850969456Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-20T07:46:14.851722346Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=752.76µs grafana | logger=migrator t=2025-06-20T07:46:14.870863354Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-20T07:46:14.87257789Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.712996ms grafana | logger=migrator t=2025-06-20T07:46:14.878459Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-20T07:46:14.878895811Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=433.231µs grafana | logger=migrator t=2025-06-20T07:46:14.884192375Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-20T07:46:14.884810351Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=617.406µs grafana | logger=migrator t=2025-06-20T07:46:14.888267415Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-20T07:46:14.888516971Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=248.466µs grafana | logger=migrator t=2025-06-20T07:46:14.892026506Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.896417005Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.389839ms grafana | logger=migrator t=2025-06-20T07:46:14.903063745Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.908276456Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.211561ms grafana | logger=migrator t=2025-06-20T07:46:14.912323206Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.913564039Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.240493ms grafana | logger=migrator t=2025-06-20T07:46:14.917384762Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.918596375Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.211093ms grafana | logger=migrator t=2025-06-20T07:46:14.924151346Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-20T07:46:14.924554027Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=402.02µs grafana | logger=migrator 
t=2025-06-20T07:46:14.92911102Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-20T07:46:14.932627545Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.516065ms grafana | logger=migrator t=2025-06-20T07:46:14.936048047Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-20T07:46:14.937089375Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.040698ms grafana | logger=migrator t=2025-06-20T07:46:14.940654502Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-20T07:46:14.940981711Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=326.739µs grafana | logger=migrator t=2025-06-20T07:46:14.947020274Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-20T07:46:14.947674222Z level=info msg="Migration successfully executed" id="Move region to single row" duration=652.638µs grafana | logger=migrator t=2025-06-20T07:46:14.953237222Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.954158008Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=924.436µs grafana | logger=migrator t=2025-06-20T07:46:14.959415609Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.960316865Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=900.915µs grafana | logger=migrator t=2025-06-20T07:46:14.965362801Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.966264575Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=901.234µs grafana | logger=migrator t=2025-06-20T07:46:14.970499129Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.971416825Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=916.946µs grafana | logger=migrator t=2025-06-20T07:46:14.976490681Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.977307594Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=816.973µs grafana | logger=migrator t=2025-06-20T07:46:14.983569403Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-20T07:46:14.985049064Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.479011ms grafana | logger=migrator t=2025-06-20T07:46:14.989482193Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-20T07:46:14.989511314Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=29.981µs grafana | logger=migrator 
t=2025-06-20T07:46:14.994790267Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-20T07:46:14.994807737Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=18.08µs grafana | logger=migrator t=2025-06-20T07:46:14.999391612Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-20T07:46:14.999426323Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=36.451µs grafana | logger=migrator t=2025-06-20T07:46:15.006491573Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-20T07:46:15.007750087Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.258354ms grafana | logger=migrator t=2025-06-20T07:46:15.012166404Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-20T07:46:15.01350067Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.336306ms grafana | logger=migrator t=2025-06-20T07:46:15.019016148Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-20T07:46:15.020379964Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.362996ms grafana | logger=migrator t=2025-06-20T07:46:15.02472858Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-20T07:46:15.026112277Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.389177ms grafana | logger=migrator t=2025-06-20T07:46:15.031809379Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-20T07:46:15.032040775Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=231.666µs grafana | logger=migrator t=2025-06-20T07:46:15.038100256Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-20T07:46:15.038495918Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=391.722µs grafana | logger=migrator t=2025-06-20T07:46:15.042142645Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-20T07:46:15.042161435Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=19.12µs grafana | logger=migrator t=2025-06-20T07:46:15.046617335Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-20T07:46:15.052930433Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=6.313508ms grafana | logger=migrator t=2025-06-20T07:46:15.057291919Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-20T07:46:15.058218915Z level=info msg="Migration successfully executed" id="create team table" duration=928.775µs grafana | logger=migrator t=2025-06-20T07:46:15.082882222Z level=info 
msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-20T07:46:15.08426513Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.382558ms grafana | logger=migrator t=2025-06-20T07:46:15.088971716Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-20T07:46:15.089837138Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=865.032µs grafana | logger=migrator t=2025-06-20T07:46:15.095476299Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-20T07:46:15.102649511Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.162102ms grafana | logger=migrator t=2025-06-20T07:46:15.109785431Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-20T07:46:15.110155811Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=369.85µs grafana | logger=migrator t=2025-06-20T07:46:15.114509037Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:15.115881344Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.371937ms grafana | logger=migrator t=2025-06-20T07:46:15.120067276Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-20T07:46:15.126027525Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.960339ms grafana | logger=migrator t=2025-06-20T07:46:15.133114624Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-20T07:46:15.13781972Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.688055ms grafana | logger=migrator t=2025-06-20T07:46:15.141742444Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-20T07:46:15.142529266Z level=info msg="Migration successfully executed" id="create team member table" duration=786.542µs grafana | logger=migrator t=2025-06-20T07:46:15.147097137Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-20T07:46:15.148086724Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=989.117µs grafana | logger=migrator t=2025-06-20T07:46:15.153730605Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-20T07:46:15.155827451Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.105007ms grafana | logger=migrator t=2025-06-20T07:46:15.162756486Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-20T07:46:15.16441776Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.660955ms grafana | logger=migrator t=2025-06-20T07:46:15.168821877Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-20T07:46:15.173731659Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.908962ms grafana | logger=migrator t=2025-06-20T07:46:15.177807617Z 
level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-20T07:46:15.182597915Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.789658ms grafana | logger=migrator t=2025-06-20T07:46:15.188155254Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-20T07:46:15.192915291Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.759557ms grafana | logger=migrator t=2025-06-20T07:46:15.198547231Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-20T07:46:15.199594169Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.046458ms grafana | logger=migrator t=2025-06-20T07:46:15.204276444Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-20T07:46:15.205187868Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=911.324µs grafana | logger=migrator t=2025-06-20T07:46:15.209230447Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-20T07:46:15.210984593Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.748116ms grafana | logger=migrator t=2025-06-20T07:46:15.215772001Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-20T07:46:15.216998964Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.227022ms grafana | logger=migrator t=2025-06-20T07:46:15.220823506Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-20T07:46:15.222042508Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.218522ms grafana | logger=migrator t=2025-06-20T07:46:15.228176232Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-20T07:46:15.230876024Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=2.699672ms grafana | logger=migrator t=2025-06-20T07:46:15.235491678Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-20T07:46:15.236835813Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.380656ms grafana | logger=migrator t=2025-06-20T07:46:15.241384955Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-20T07:46:15.24266799Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.283845ms grafana | logger=migrator t=2025-06-20T07:46:15.250673143Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-20T07:46:15.252176333Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.50153ms grafana | logger=migrator t=2025-06-20T07:46:15.258741628Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator 
t=2025-06-20T07:46:15.259794837Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.056029ms grafana | logger=migrator t=2025-06-20T07:46:15.266638109Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-20T07:46:15.267154613Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=517.724µs grafana | logger=migrator t=2025-06-20T07:46:15.271336105Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-20T07:46:15.272346022Z level=info msg="Migration successfully executed" id="create tag table" duration=1.009687ms grafana | logger=migrator t=2025-06-20T07:46:15.276439971Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-20T07:46:15.277749316Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.308945ms grafana | logger=migrator t=2025-06-20T07:46:15.299095976Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-20T07:46:15.301002967Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.91212ms grafana | logger=migrator t=2025-06-20T07:46:15.307677415Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-20T07:46:15.309096913Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.419828ms grafana | logger=migrator t=2025-06-20T07:46:15.31309216Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-20T07:46:15.314314192Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.222992ms grafana | logger=migrator t=2025-06-20T07:46:15.320693762Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:15.337966834Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.274012ms grafana | logger=migrator t=2025-06-20T07:46:15.342889695Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-20T07:46:15.343703887Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=815.512µs grafana | logger=migrator t=2025-06-20T07:46:15.347644392Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-20T07:46:15.349169383Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.523981ms grafana | logger=migrator t=2025-06-20T07:46:15.356440557Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:15.357080754Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=641.437µs grafana | logger=migrator t=2025-06-20T07:46:15.361357269Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-20T07:46:15.362210561Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=853.293µs grafana | logger=migrator t=2025-06-20T07:46:15.366291861Z 
level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-20T07:46:15.367296097Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.004337ms grafana | logger=migrator t=2025-06-20T07:46:15.377305215Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-20T07:46:15.378435334Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.131949ms grafana | logger=migrator t=2025-06-20T07:46:15.382698038Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-20T07:46:15.382722649Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=26.431µs grafana | logger=migrator t=2025-06-20T07:46:15.386530351Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.391428912Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.897301ms grafana | logger=migrator t=2025-06-20T07:46:15.395440998Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.400056872Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.615114ms grafana | logger=migrator t=2025-06-20T07:46:15.408180469Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.416993254Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=8.808885ms grafana | logger=migrator t=2025-06-20T07:46:15.4209534Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.425000978Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.046628ms grafana | logger=migrator t=2025-06-20T07:46:15.431997075Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.433047462Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.050598ms grafana | logger=migrator t=2025-06-20T07:46:15.439430453Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.448399633Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.96868ms grafana | logger=migrator t=2025-06-20T07:46:15.452561593Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-20T07:46:15.458591765Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=6.029192ms grafana | logger=migrator t=2025-06-20T07:46:15.462650073Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-20T07:46:15.46327895Z level=info msg="Migration successfully executed" id="create server_lock table" duration=628.507µs grafana | logger=migrator t=2025-06-20T07:46:15.469675421Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-20T07:46:15.471487759Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" 
duration=1.812508ms grafana | logger=migrator t=2025-06-20T07:46:15.476797891Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-20T07:46:15.47825815Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.463019ms grafana | logger=migrator t=2025-06-20T07:46:15.501591703Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-20T07:46:15.504211323Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.61927ms grafana | logger=migrator t=2025-06-20T07:46:15.508298202Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-20T07:46:15.509432973Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.13392ms grafana | logger=migrator t=2025-06-20T07:46:15.51346829Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-20T07:46:15.514606751Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.138081ms grafana | logger=migrator t=2025-06-20T07:46:15.518472964Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-20T07:46:15.52472105Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.246806ms grafana | logger=migrator t=2025-06-20T07:46:15.530276829Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-20T07:46:15.531661636Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.384657ms grafana | logger=migrator t=2025-06-20T07:46:15.53672271Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-20T07:46:15.543084711Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=6.361191ms grafana | logger=migrator t=2025-06-20T07:46:15.546973855Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-20T07:46:15.5479422Z level=info msg="Migration successfully executed" id="create cache_data table" duration=968.016µs grafana | logger=migrator t=2025-06-20T07:46:15.553272972Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-20T07:46:15.555074281Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.800969ms grafana | logger=migrator t=2025-06-20T07:46:15.562603822Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-20T07:46:15.563766153Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.166821ms grafana | logger=migrator t=2025-06-20T07:46:15.568864689Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-20T07:46:15.569919077Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.054508ms grafana | logger=migrator t=2025-06-20T07:46:15.573699449Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | 
logger=migrator t=2025-06-20T07:46:15.573719519Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=20.75µs grafana | logger=migrator t=2025-06-20T07:46:15.58014052Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-20T07:46:15.580427998Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=281.888µs grafana | logger=migrator t=2025-06-20T07:46:15.588853943Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-20T07:46:15.591819852Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=2.964889ms grafana | logger=migrator t=2025-06-20T07:46:15.596329623Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-20T07:46:15.597519864Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.189811ms grafana | logger=migrator t=2025-06-20T07:46:15.601520362Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-20T07:46:15.602759944Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.239252ms grafana | logger=migrator t=2025-06-20T07:46:15.608658222Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-20T07:46:15.608680862Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=23µs grafana | logger=migrator t=2025-06-20T07:46:15.612556596Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-20T07:46:15.613720307Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.162801ms grafana | logger=migrator t=2025-06-20T07:46:15.617433536Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-20T07:46:15.61909248Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.658384ms grafana | logger=migrator t=2025-06-20T07:46:15.624973067Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-20T07:46:15.62621139Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.237443ms grafana | logger=migrator t=2025-06-20T07:46:15.630210427Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-20T07:46:15.631337377Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.12646ms grafana | logger=migrator t=2025-06-20T07:46:15.637130842Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-20T07:46:15.646461031Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.329389ms 
grafana | logger=migrator t=2025-06-20T07:46:15.651216978Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-20T07:46:15.652873602Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.653094ms grafana | logger=migrator t=2025-06-20T07:46:15.658887123Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-20T07:46:15.659083798Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=195.805µs grafana | logger=migrator t=2025-06-20T07:46:15.664984015Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-20T07:46:15.667484512Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=2.492087ms grafana | logger=migrator t=2025-06-20T07:46:15.674398968Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-20T07:46:15.675598769Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.199692ms grafana | logger=migrator t=2025-06-20T07:46:15.679249926Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-20T07:46:15.680283525Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.033498ms grafana | logger=migrator t=2025-06-20T07:46:15.685865784Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-20T07:46:15.685885914Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=23.861µs grafana | logger=migrator t=2025-06-20T07:46:15.691245747Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-20T07:46:15.692929852Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.683365ms grafana | logger=migrator t=2025-06-20T07:46:15.697146855Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-20T07:46:15.699005144Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.852069ms grafana | logger=migrator t=2025-06-20T07:46:15.719687797Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-20T07:46:15.720777105Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.089058ms grafana | logger=migrator t=2025-06-20T07:46:15.725644046Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-20T07:46:15.727373672Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.729177ms grafana | logger=migrator t=2025-06-20T07:46:15.731483871Z level=info 
msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.738744926Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.261365ms grafana | logger=migrator t=2025-06-20T07:46:15.746598205Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.747657603Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.055258ms grafana | logger=migrator t=2025-06-20T07:46:15.75204022Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.75313342Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.09271ms grafana | logger=migrator t=2025-06-20T07:46:15.758569595Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.786320586Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.749621ms grafana | logger=migrator t=2025-06-20T07:46:15.792289235Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.815754481Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.470186ms grafana | logger=migrator t=2025-06-20T07:46:15.82018658Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.820901149Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=714.339µs grafana | logger=migrator t=2025-06-20T07:46:15.826301973Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.827813533Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.51088ms grafana | logger=migrator t=2025-06-20T07:46:15.833590977Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-20T07:46:15.843281167Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.69609ms grafana | logger=migrator t=2025-06-20T07:46:15.848202888Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-20T07:46:15.852367419Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.164261ms grafana | logger=migrator t=2025-06-20T07:46:15.856978102Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:15.858070441Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.091689ms grafana | logger=migrator t=2025-06-20T07:46:15.864872993Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-20T07:46:15.866803124Z level=info msg="Migration successfully executed" id="add 
index in alert_rule on org_id and title columns" duration=1.928991ms grafana | logger=migrator t=2025-06-20T07:46:15.871787928Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-20T07:46:15.872793754Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.005466ms grafana | logger=migrator t=2025-06-20T07:46:15.876860133Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-20T07:46:15.877845439Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=984.646µs grafana | logger=migrator t=2025-06-20T07:46:15.883450439Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-20T07:46:15.88347812Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=29.231µs grafana | logger=migrator t=2025-06-20T07:46:15.888771641Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.896092567Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.322646ms grafana | logger=migrator t=2025-06-20T07:46:15.90034476Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.906463423Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.118173ms grafana | logger=migrator t=2025-06-20T07:46:15.92806097Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.936850595Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=8.789905ms grafana | logger=migrator t=2025-06-20T07:46:15.942366252Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-20T07:46:15.943266347Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=899.915µs grafana | logger=migrator t=2025-06-20T07:46:15.94715341Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-20T07:46:15.948193307Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.039617ms grafana | logger=migrator t=2025-06-20T07:46:15.954492846Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.963866336Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.37356ms grafana | logger=migrator t=2025-06-20T07:46:15.967959995Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.97410521Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.144455ms grafana | logger=migrator t=2025-06-20T07:46:15.978380504Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator 
t=2025-06-20T07:46:15.979561346Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.180522ms grafana | logger=migrator t=2025-06-20T07:46:15.986812149Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:15.996032815Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.225876ms grafana | logger=migrator t=2025-06-20T07:46:16.000143255Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:16.006223397Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.079702ms grafana | logger=migrator t=2025-06-20T07:46:16.013621065Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:16.013638236Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=17.731µs grafana | logger=migrator t=2025-06-20T07:46:16.019778769Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:16.021430394Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.647525ms grafana | logger=migrator t=2025-06-20T07:46:16.0258121Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-20T07:46:16.026851248Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.038798ms grafana | logger=migrator t=2025-06-20T07:46:16.030981039Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-20T07:46:16.032809058Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.82627ms grafana | logger=migrator t=2025-06-20T07:46:16.038388967Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-20T07:46:16.038423008Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=35.251µs grafana | logger=migrator t=2025-06-20T07:46:16.042749413Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:16.050198522Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.44844ms grafana | logger=migrator t=2025-06-20T07:46:16.054851165Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:16.061595116Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.743671ms grafana | logger=migrator t=2025-06-20T07:46:16.070338379Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:16.077488541Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.150412ms 
grafana | logger=migrator t=2025-06-20T07:46:16.081631061Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:16.088249947Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.618136ms grafana | logger=migrator t=2025-06-20T07:46:16.094258769Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-20T07:46:16.107705547Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=13.402758ms grafana | logger=migrator t=2025-06-20T07:46:16.114071437Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:16.114090588Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=20.101µs grafana | logger=migrator t=2025-06-20T07:46:16.118505075Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-20T07:46:16.119844942Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.338656ms grafana | logger=migrator t=2025-06-20T07:46:16.136797974Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-20T07:46:16.146807121Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.009317ms grafana | logger=migrator t=2025-06-20T07:46:16.15201193Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-20T07:46:16.152031901Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=20.561µs grafana | logger=migrator t=2025-06-20T07:46:16.156016117Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-20T07:46:16.162544931Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.528024ms grafana | logger=migrator t=2025-06-20T07:46:16.167117354Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-20T07:46:16.168689205Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.570111ms grafana | logger=migrator t=2025-06-20T07:46:16.174498Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-20T07:46:16.181809346Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.309406ms grafana | logger=migrator t=2025-06-20T07:46:16.1898162Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-20T07:46:16.191476804Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.660914ms grafana | logger=migrator t=2025-06-20T07:46:16.1980706Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-20T07:46:16.199370445Z level=info 
msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.302265ms grafana | logger=migrator t=2025-06-20T07:46:16.205229431Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-20T07:46:16.212356502Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.124481ms grafana | logger=migrator t=2025-06-20T07:46:16.216959894Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-20T07:46:16.21789177Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=934.566µs grafana | logger=migrator t=2025-06-20T07:46:16.224164777Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-20T07:46:16.2257963Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.634543ms grafana | logger=migrator t=2025-06-20T07:46:16.230452474Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-20T07:46:16.231286146Z level=info msg="Migration successfully executed" id="create alert_image table" duration=833.352µs grafana | logger=migrator t=2025-06-20T07:46:16.235466459Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-20T07:46:16.236461095Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=994.346µs grafana | logger=migrator t=2025-06-20T07:46:16.241982932Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-20T07:46:16.242009683Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=28.001µs grafana | logger=migrator t=2025-06-20T07:46:16.249216846Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-20T07:46:16.250806408Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.552401ms grafana | logger=migrator t=2025-06-20T07:46:16.256021127Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-20T07:46:16.257724183Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.703066ms grafana | logger=migrator t=2025-06-20T07:46:16.262990044Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-20T07:46:16.263372514Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-20T07:46:16.269842006Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-20T07:46:16.271793909Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=1.946782ms grafana | logger=migrator t=2025-06-20T07:46:16.278031195Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana 
| logger=migrator t=2025-06-20T07:46:16.279625448Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.596353ms grafana | logger=migrator t=2025-06-20T07:46:16.283749958Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-20T07:46:16.290702154Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.950946ms grafana | logger=migrator t=2025-06-20T07:46:16.29544096Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-20T07:46:16.296308103Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=864.103µs grafana | logger=migrator t=2025-06-20T07:46:16.310270656Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-20T07:46:16.312257869Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.988193ms grafana | logger=migrator t=2025-06-20T07:46:16.3171525Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-20T07:46:16.318740092Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.588203ms grafana | logger=migrator t=2025-06-20T07:46:16.338361336Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-20T07:46:16.340271737Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.917701ms grafana | logger=migrator t=2025-06-20T07:46:16.347433968Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:16.348441505Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.007067ms grafana | logger=migrator t=2025-06-20T07:46:16.352643408Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-20T07:46:16.352667988Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=25.16µs grafana | logger=migrator t=2025-06-20T07:46:16.35911596Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-20T07:46:16.359140961Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=26.241µs grafana | logger=migrator t=2025-06-20T07:46:16.365407128Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-20T07:46:16.376367791Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.962403ms grafana | logger=migrator t=2025-06-20T07:46:16.380560983Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-20T07:46:16.38084013Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=277.917µs grafana | logger=migrator t=2025-06-20T07:46:16.38496819Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | 
logger=migrator t=2025-06-20T07:46:16.387161479Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=2.192699ms grafana | logger=migrator t=2025-06-20T07:46:16.392943153Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-20T07:46:16.393495328Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=552.835µs grafana | logger=migrator t=2025-06-20T07:46:16.399116678Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-20T07:46:16.400803393Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.686025ms grafana | logger=migrator t=2025-06-20T07:46:16.405329225Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-20T07:46:16.406784643Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.454849ms grafana | logger=migrator t=2025-06-20T07:46:16.415721011Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-20T07:46:16.450957003Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.235832ms grafana | logger=migrator t=2025-06-20T07:46:16.455274477Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-20T07:46:16.46060103Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.325943ms grafana | logger=migrator t=2025-06-20T07:46:16.46472621Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-20T07:46:16.464974027Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=244.997µs grafana | logger=migrator t=2025-06-20T07:46:16.468953633Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-20T07:46:16.505156559Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=36.202406ms grafana | logger=migrator t=2025-06-20T07:46:16.511245972Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-20T07:46:16.545022414Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.775832ms grafana | logger=migrator t=2025-06-20T07:46:16.562203383Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-20T07:46:16.563758884Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.550651ms grafana | logger=migrator t=2025-06-20T07:46:16.568160232Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-20T07:46:16.569278052Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.11409ms grafana | logger=migrator t=2025-06-20T07:46:16.57596551Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-20T07:46:16.577376918Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=1.410538ms 
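[Editor's note] The migrator entries above and below follow a fixed logfmt-style pattern: logger=migrator, t=<timestamp>, level, msg="Executing migration"/"Migration successfully executed", id=<migration id>, duration=<value><unit>. A minimal sketch (not part of this CI job) for summarizing such a captured log and flagging slow migrations is shown here; the log file path and the 10 ms threshold are assumptions for illustration only.

    # Editorial sketch: summarize Grafana migrator timings from a captured console log.
    # Assumes the log text uses the duration=<value><unit> form seen above (µs/ms/s).
    import re

    ENTRY = re.compile(
        r'msg="Migration successfully executed" id="?(?P<id>[^"]+?)"? '
        r'duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)\b'
    )
    TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

    def slow_migrations(log_text, threshold_ms=10.0):
        """Return (migration id, duration in ms) pairs slower than threshold_ms, slowest first."""
        hits = []
        for m in ENTRY.finditer(log_text):
            ms = float(m.group("value")) * TO_MS[m.group("unit")]
            if ms >= threshold_ms:
                hits.append((m.group("id"), ms))
        return sorted(hits, key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        # "grafana-migrator.log" is a hypothetical capture of the console output above.
        with open("grafana-migrator.log", encoding="utf-8") as fh:
            for mig_id, ms in slow_migrations(fh.read()):
                print(f"{ms:8.2f} ms  {mig_id}")

Run against this section, such a script would surface the table renames and the seed_assignment column rework (tens of milliseconds each) as the slowest steps, while index and column additions stay in the low-millisecond range.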
grafana | logger=migrator t=2025-06-20T07:46:16.584690704Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-20T07:46:16.586097751Z level=info msg="Migration successfully executed" id="create permission table" duration=1.410117ms grafana | logger=migrator t=2025-06-20T07:46:16.591088354Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-20T07:46:16.592783539Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.694705ms grafana | logger=migrator t=2025-06-20T07:46:16.598864732Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-20T07:46:16.59994057Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.075528ms grafana | logger=migrator t=2025-06-20T07:46:16.605204541Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-20T07:46:16.606172857Z level=info msg="Migration successfully executed" id="create role table" duration=967.896µs grafana | logger=migrator t=2025-06-20T07:46:16.610580535Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-20T07:46:16.618095985Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.51452ms grafana | logger=migrator t=2025-06-20T07:46:16.623802878Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-20T07:46:16.63140219Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.603062ms grafana | logger=migrator t=2025-06-20T07:46:16.638027658Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-20T07:46:16.639122987Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.095089ms grafana | logger=migrator t=2025-06-20T07:46:16.643413102Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-20T07:46:16.644605873Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.192021ms grafana | logger=migrator t=2025-06-20T07:46:16.650319966Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:16.652193026Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.87262ms grafana | logger=migrator t=2025-06-20T07:46:16.657009934Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-20T07:46:16.658885374Z level=info msg="Migration successfully executed" id="create team role table" duration=1.8736ms grafana | logger=migrator t=2025-06-20T07:46:16.666448957Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-20T07:46:16.667688469Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.239182ms grafana | logger=migrator t=2025-06-20T07:46:16.673275698Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-20T07:46:16.674751358Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.47411ms grafana | logger=migrator t=2025-06-20T07:46:16.683623885Z 
level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-20T07:46:16.685800133Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.175588ms grafana | logger=migrator t=2025-06-20T07:46:16.69058249Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-20T07:46:16.692303077Z level=info msg="Migration successfully executed" id="create user role table" duration=1.720247ms grafana | logger=migrator t=2025-06-20T07:46:16.697482585Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-20T07:46:16.698662106Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.178681ms grafana | logger=migrator t=2025-06-20T07:46:16.702101208Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-20T07:46:16.703251109Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.148961ms grafana | logger=migrator t=2025-06-20T07:46:16.711436588Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-20T07:46:16.712719672Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.282744ms grafana | logger=migrator t=2025-06-20T07:46:16.717543841Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-20T07:46:16.718491666Z level=info msg="Migration successfully executed" id="create builtin role table" duration=947.145µs grafana | logger=migrator t=2025-06-20T07:46:16.723055448Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-20T07:46:16.725155744Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=2.099186ms grafana | logger=migrator t=2025-06-20T07:46:16.729806718Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-20T07:46:16.73097969Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.172462ms grafana | logger=migrator t=2025-06-20T07:46:16.735966203Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-20T07:46:16.744220983Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.25391ms grafana | logger=migrator t=2025-06-20T07:46:16.762356617Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-20T07:46:16.764186106Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.829509ms grafana | logger=migrator t=2025-06-20T07:46:16.76883504Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-20T07:46:16.770091474Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.256584ms grafana | logger=migrator t=2025-06-20T07:46:16.774208944Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:16.775463638Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.253965ms grafana | 
logger=migrator t=2025-06-20T07:46:16.780245305Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-20T07:46:16.78157061Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.325186ms grafana | logger=migrator t=2025-06-20T07:46:16.785401912Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-20T07:46:16.786243816Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=841.394µs grafana | logger=migrator t=2025-06-20T07:46:16.791799784Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-20T07:46:16.793099648Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.300055ms grafana | logger=migrator t=2025-06-20T07:46:16.799486588Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-20T07:46:16.807535103Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.048025ms grafana | logger=migrator t=2025-06-20T07:46:16.811431118Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-20T07:46:16.819439781Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.007953ms grafana | logger=migrator t=2025-06-20T07:46:16.825127523Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-20T07:46:16.833181008Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.053015ms grafana | logger=migrator t=2025-06-20T07:46:16.83735675Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-20T07:46:16.845390705Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.033315ms grafana | logger=migrator t=2025-06-20T07:46:16.852414992Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-20T07:46:16.853779929Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.365007ms grafana | logger=migrator t=2025-06-20T07:46:16.858044912Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-20T07:46:16.85981781Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.772088ms grafana | logger=migrator t=2025-06-20T07:46:16.866862507Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-20T07:46:16.867926766Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.063649ms grafana | logger=migrator t=2025-06-20T07:46:16.872655922Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-20T07:46:16.88303651Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=10.381137ms grafana | logger=migrator t=2025-06-20T07:46:16.888006852Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-20T07:46:16.889193174Z 
level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.185762ms grafana | logger=migrator t=2025-06-20T07:46:16.894551637Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-20T07:46:16.895612586Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.060629ms grafana | logger=migrator t=2025-06-20T07:46:16.899604252Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-20T07:46:16.900495885Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=890.923µs grafana | logger=migrator t=2025-06-20T07:46:16.90814887Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-20T07:46:16.909922028Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.772038ms grafana | logger=migrator t=2025-06-20T07:46:16.915491237Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-20T07:46:16.915510397Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=19.831µs grafana | logger=migrator t=2025-06-20T07:46:16.919581615Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-20T07:46:16.920435338Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=853.443µs grafana | logger=migrator t=2025-06-20T07:46:16.924690022Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-20T07:46:16.924801395Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=112.373µs grafana | logger=migrator t=2025-06-20T07:46:16.930845526Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-20T07:46:16.931628487Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=782.801µs grafana | logger=migrator t=2025-06-20T07:46:16.935952483Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-20T07:46:16.936970489Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.018836ms grafana | logger=migrator t=2025-06-20T07:46:16.941589683Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-20T07:46:16.94221956Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=629.657µs grafana | logger=migrator t=2025-06-20T07:46:16.946185946Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-20T07:46:16.946425452Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=239.406µs grafana | logger=migrator t=2025-06-20T07:46:16.951591891Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-20T07:46:16.95309493Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.502879ms grafana | logger=migrator 
t=2025-06-20T07:46:16.965890572Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-20T07:46:16.966731434Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=840.712µs grafana | logger=migrator t=2025-06-20T07:46:16.971299827Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-20T07:46:16.972411986Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.112009ms grafana | logger=migrator t=2025-06-20T07:46:16.97740596Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-20T07:46:16.985596758Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.190378ms grafana | logger=migrator t=2025-06-20T07:46:16.993590452Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-20T07:46:16.993608853Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=19.18µs grafana | logger=migrator t=2025-06-20T07:46:16.998095532Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-20T07:46:16.999080658Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=985.006µs grafana | logger=migrator t=2025-06-20T07:46:17.003188138Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-20T07:46:17.005051408Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.029417ms grafana | logger=migrator t=2025-06-20T07:46:17.017848759Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-20T07:46:17.019632687Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.787198ms grafana | logger=migrator t=2025-06-20T07:46:17.023778577Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-20T07:46:17.033081216Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.302389ms grafana | logger=migrator t=2025-06-20T07:46:17.036923689Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.037690389Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=766.62µs grafana | logger=migrator t=2025-06-20T07:46:17.044160922Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.046069093Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.90758ms grafana | logger=migrator t=2025-06-20T07:46:17.050519492Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:17.072226102Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.628218ms grafana | logger=migrator t=2025-06-20T07:46:17.075845548Z level=info msg="Executing migration" id="create correlation v2" grafana | 
logger=migrator t=2025-06-20T07:46:17.076771802Z level=info msg="Migration successfully executed" id="create correlation v2" duration=925.304µs grafana | logger=migrator t=2025-06-20T07:46:17.082528716Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.084438777Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.909621ms grafana | logger=migrator t=2025-06-20T07:46:17.088833035Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.090749606Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.916101ms grafana | logger=migrator t=2025-06-20T07:46:17.096937271Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-20T07:46:17.098119213Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.181832ms grafana | logger=migrator t=2025-06-20T07:46:17.104465942Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:17.105035717Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=570.135µs grafana | logger=migrator t=2025-06-20T07:46:17.109566918Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-20T07:46:17.110501203Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=934.065µs grafana | logger=migrator t=2025-06-20T07:46:17.114562082Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-20T07:46:17.121290101Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.725229ms grafana | logger=migrator t=2025-06-20T07:46:17.128187416Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-20T07:46:17.136998801Z level=info msg="Migration successfully executed" id="add type column" duration=8.810795ms grafana | logger=migrator t=2025-06-20T07:46:17.141240524Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-20T07:46:17.142170309Z level=info msg="Migration successfully executed" id="create entity_events table" duration=929.325µs grafana | logger=migrator t=2025-06-20T07:46:17.146237267Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-20T07:46:17.147358397Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.12074ms grafana | logger=migrator t=2025-06-20T07:46:17.152585737Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.161239578Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.183361688Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.183744319Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | 
logger=migrator t=2025-06-20T07:46:17.188895316Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-20T07:46:17.189791131Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=894.595µs grafana | logger=migrator t=2025-06-20T07:46:17.19538671Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-20T07:46:17.196841379Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.453798ms grafana | logger=migrator t=2025-06-20T07:46:17.201423431Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.203219649Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.793387ms grafana | logger=migrator t=2025-06-20T07:46:17.209935889Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-20T07:46:17.211240403Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.303784ms grafana | logger=migrator t=2025-06-20T07:46:17.216838042Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.218124227Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.289695ms grafana | logger=migrator t=2025-06-20T07:46:17.22236056Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.223359787Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=999.227µs grafana | logger=migrator t=2025-06-20T07:46:17.228855814Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-20T07:46:17.230116677Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.258363ms grafana | logger=migrator t=2025-06-20T07:46:17.234408451Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-20T07:46:17.236088137Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.682415ms grafana | logger=migrator t=2025-06-20T07:46:17.240796853Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.242729833Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.928711ms grafana | logger=migrator t=2025-06-20T07:46:17.248442797Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:17.249718371Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.273453ms grafana | logger=migrator t=2025-06-20T07:46:17.255452684Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-20T07:46:17.257489838Z level=info msg="Migration successfully 
executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.036764ms grafana | logger=migrator t=2025-06-20T07:46:17.262321147Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-20T07:46:17.28377059Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.448783ms grafana | logger=migrator t=2025-06-20T07:46:17.287950202Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-20T07:46:17.294296461Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.345849ms grafana | logger=migrator t=2025-06-20T07:46:17.299606383Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-20T07:46:17.308966022Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.356439ms grafana | logger=migrator t=2025-06-20T07:46:17.316103803Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-20T07:46:17.316809672Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=708.849µs grafana | logger=migrator t=2025-06-20T07:46:17.321072795Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-20T07:46:17.330625311Z level=info msg="Migration successfully executed" id="add share column" duration=9.551836ms grafana | logger=migrator t=2025-06-20T07:46:17.336010855Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-20T07:46:17.336420486Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=406.611µs grafana | logger=migrator t=2025-06-20T07:46:17.342686813Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-20T07:46:17.343758192Z level=info msg="Migration successfully executed" id="create file table" duration=1.065049ms grafana | logger=migrator t=2025-06-20T07:46:17.347948983Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-20T07:46:17.349909586Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.959383ms grafana | logger=migrator t=2025-06-20T07:46:17.354852437Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-20T07:46:17.356160463Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.309926ms grafana | logger=migrator t=2025-06-20T07:46:17.361150476Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-20T07:46:17.36204303Z level=info msg="Migration successfully executed" id="create file_meta table" duration=891.904µs grafana | logger=migrator t=2025-06-20T07:46:17.365999215Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-20T07:46:17.367150716Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.151111ms grafana | logger=migrator t=2025-06-20T07:46:17.374177973Z level=info msg="Executing migration" 
id="set path collation in file table" grafana | logger=migrator t=2025-06-20T07:46:17.374204844Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=28.461µs grafana | logger=migrator t=2025-06-20T07:46:17.395173085Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-20T07:46:17.395218896Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=48.502µs grafana | logger=migrator t=2025-06-20T07:46:17.402387317Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-20T07:46:17.403369673Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=976.716µs grafana | logger=migrator t=2025-06-20T07:46:17.408765018Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-20T07:46:17.409393944Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=628.386µs grafana | logger=migrator t=2025-06-20T07:46:17.417354347Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-20T07:46:17.41970893Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.351363ms grafana | logger=migrator t=2025-06-20T07:46:17.426548162Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-20T07:46:17.435874911Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.326389ms grafana | logger=migrator t=2025-06-20T07:46:17.4418168Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-20T07:46:17.441988465Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=171.225µs grafana | logger=migrator t=2025-06-20T07:46:17.448384075Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-20T07:46:17.450396729Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.035394ms grafana | logger=migrator t=2025-06-20T07:46:17.456971005Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-20T07:46:17.458109955Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.13926ms grafana | logger=migrator t=2025-06-20T07:46:17.462286767Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-20T07:46:17.462728508Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=441.661µs grafana | logger=migrator t=2025-06-20T07:46:17.470329552Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-20T07:46:17.471310637Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=980.066µs grafana | logger=migrator t=2025-06-20T07:46:17.475745996Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-20T07:46:17.487766207Z level=info msg="Migration successfully executed" id="add 
action column to seed_assignment" duration=12.020391ms grafana | logger=migrator t=2025-06-20T07:46:17.494078195Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-20T07:46:17.50137646Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.295635ms grafana | logger=migrator t=2025-06-20T07:46:17.506500627Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-20T07:46:17.507633747Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.13453ms grafana | logger=migrator t=2025-06-20T07:46:17.514088459Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-20T07:46:17.589667297Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.559328ms grafana | logger=migrator t=2025-06-20T07:46:17.612263151Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-20T07:46:17.614692506Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.434486ms grafana | logger=migrator t=2025-06-20T07:46:17.619586987Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-20T07:46:17.621505788Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.918881ms grafana | logger=migrator t=2025-06-20T07:46:17.628258968Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-20T07:46:17.6579144Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=29.650362ms grafana | logger=migrator t=2025-06-20T07:46:17.663719615Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-20T07:46:17.671241585Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.52296ms grafana | logger=migrator t=2025-06-20T07:46:17.676242489Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-20T07:46:17.676740542Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=501.403µs grafana | logger=migrator t=2025-06-20T07:46:17.682373403Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-20T07:46:17.682541067Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=168.004µs grafana | logger=migrator t=2025-06-20T07:46:17.687075529Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-20T07:46:17.687240913Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=165.264µs grafana | logger=migrator t=2025-06-20T07:46:17.69499746Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-20T07:46:17.695413601Z level=info msg="Migration successfully executed" id="managed 
folder permissions library panel actions migration" duration=416.701µs grafana | logger=migrator t=2025-06-20T07:46:17.706744344Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-20T07:46:17.707138444Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=395.04µs grafana | logger=migrator t=2025-06-20T07:46:17.711794169Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-20T07:46:17.713217166Z level=info msg="Migration successfully executed" id="create folder table" duration=1.423727ms grafana | logger=migrator t=2025-06-20T07:46:17.717804149Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-20T07:46:17.719045472Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.241293ms grafana | logger=migrator t=2025-06-20T07:46:17.7249234Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-20T07:46:17.72720176Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.273969ms grafana | logger=migrator t=2025-06-20T07:46:17.733285443Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-20T07:46:17.733316474Z level=info msg="Migration successfully executed" id="Update folder title length" duration=33.001µs grafana | logger=migrator t=2025-06-20T07:46:17.739471258Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-20T07:46:17.740828804Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.357316ms grafana | logger=migrator t=2025-06-20T07:46:17.748049377Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-20T07:46:17.75038908Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.335892ms grafana | logger=migrator t=2025-06-20T07:46:17.757209451Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-20T07:46:17.758713762Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.503501ms grafana | logger=migrator t=2025-06-20T07:46:17.766257593Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-20T07:46:17.767175398Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=916.855µs grafana | logger=migrator t=2025-06-20T07:46:17.774671637Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-20T07:46:17.775262484Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=589.957µs grafana | logger=migrator t=2025-06-20T07:46:17.787835409Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-20T07:46:17.789863503Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.027214ms grafana | 
logger=migrator t=2025-06-20T07:46:17.816062543Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-20T07:46:17.818015975Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.950472ms grafana | logger=migrator t=2025-06-20T07:46:17.823143612Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-20T07:46:17.824811096Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.667414ms grafana | logger=migrator t=2025-06-20T07:46:17.830037456Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-20T07:46:17.831228798Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.189782ms grafana | logger=migrator t=2025-06-20T07:46:17.83953265Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-20T07:46:17.841400709Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.867189ms grafana | logger=migrator t=2025-06-20T07:46:17.849157877Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-20T07:46:17.850204754Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.046817ms grafana | logger=migrator t=2025-06-20T07:46:17.857608482Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-20T07:46:17.859074621Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.466859ms grafana | logger=migrator t=2025-06-20T07:46:17.86539255Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-20T07:46:17.867655851Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.262841ms grafana | logger=migrator t=2025-06-20T07:46:17.877255617Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-20T07:46:17.879116357Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.86126ms grafana | logger=migrator t=2025-06-20T07:46:17.886673388Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-20T07:46:17.887891091Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.221773ms grafana | logger=migrator t=2025-06-20T07:46:17.89310224Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-20T07:46:17.89423377Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.13149ms grafana | logger=migrator t=2025-06-20T07:46:17.901485724Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-20T07:46:17.902638144Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.15251ms grafana | logger=migrator t=2025-06-20T07:46:17.908655625Z level=info msg="Executing migration" 
id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-20T07:46:17.908960343Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=305.518µs grafana | logger=migrator t=2025-06-20T07:46:17.91334265Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-20T07:46:17.926372719Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.026019ms grafana | logger=migrator t=2025-06-20T07:46:17.936067107Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-20T07:46:17.936841048Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=775.081µs grafana | logger=migrator t=2025-06-20T07:46:17.942889299Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-20T07:46:17.94291764Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=29.751µs grafana | logger=migrator t=2025-06-20T07:46:17.947079651Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-20T07:46:17.94887511Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.794798ms grafana | logger=migrator t=2025-06-20T07:46:17.954372876Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-20T07:46:17.954393537Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.212µs grafana | logger=migrator t=2025-06-20T07:46:17.959278787Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-20T07:46:17.960796118Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.517931ms grafana | logger=migrator t=2025-06-20T07:46:17.966589772Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-20T07:46:17.968287707Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.696835ms grafana | logger=migrator t=2025-06-20T07:46:17.972378997Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-20T07:46:17.973471056Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.091568ms grafana | logger=migrator t=2025-06-20T07:46:17.978341166Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-20T07:46:17.97997332Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.628694ms grafana | logger=migrator t=2025-06-20T07:46:17.985805625Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-20T07:46:17.986518284Z level=info msg="Migration successfully executed" id="copy kvstore migration status to 
each org" duration=714.739µs grafana | logger=migrator t=2025-06-20T07:46:17.99049441Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-20T07:46:17.990744887Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=251.107µs grafana | logger=migrator t=2025-06-20T07:46:17.998101553Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-20T07:46:17.999246145Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.109091ms grafana | logger=migrator t=2025-06-20T07:46:18.004818163Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-20T07:46:18.00621714Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.398497ms grafana | logger=migrator t=2025-06-20T07:46:18.027628002Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-20T07:46:18.029261265Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.631463ms grafana | logger=migrator t=2025-06-20T07:46:18.035989756Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-20T07:46:18.045718535Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.727939ms grafana | logger=migrator t=2025-06-20T07:46:18.05002105Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-20T07:46:18.059600766Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.578886ms grafana | logger=migrator t=2025-06-20T07:46:18.063923301Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-20T07:46:18.070861936Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=6.936755ms grafana | logger=migrator t=2025-06-20T07:46:18.076588969Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-20T07:46:18.086415032Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.827043ms grafana | logger=migrator t=2025-06-20T07:46:18.092938566Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-20T07:46:18.093139241Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=200.705µs grafana | logger=migrator t=2025-06-20T07:46:18.0979413Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-20T07:46:18.099205003Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.263123ms grafana | logger=migrator t=2025-06-20T07:46:18.103431636Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-20T07:46:18.116460384Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=13.030627ms grafana | logger=migrator t=2025-06-20T07:46:18.122702001Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-20T07:46:18.122902676Z level=info msg="Migration 
successfully executed" id="Update uid column values for migration run" duration=200.735µs grafana | logger=migrator t=2025-06-20T07:46:18.127838377Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-20T07:46:18.129182774Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.342757ms grafana | logger=migrator t=2025-06-20T07:46:18.133250622Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:18.156856503Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=23.604791ms grafana | logger=migrator t=2025-06-20T07:46:18.1623667Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-20T07:46:18.163052568Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=685.918µs grafana | logger=migrator t=2025-06-20T07:46:18.169737506Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:18.17099597Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.258194ms grafana | logger=migrator t=2025-06-20T07:46:18.182741744Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:18.183215656Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=471.612µs grafana | logger=migrator t=2025-06-20T07:46:18.188706763Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-20T07:46:18.189655228Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=948.375µs grafana | logger=migrator t=2025-06-20T07:46:18.193637225Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-20T07:46:18.220230744Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.592179ms grafana | logger=migrator t=2025-06-20T07:46:18.227822627Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-20T07:46:18.228636079Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=815.002µs grafana | logger=migrator t=2025-06-20T07:46:18.236180881Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-20T07:46:18.237553718Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.374208ms grafana | logger=migrator t=2025-06-20T07:46:18.241800001Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-20T07:46:18.242265313Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=466.383µs grafana | logger=migrator t=2025-06-20T07:46:18.247577775Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator 
t=2025-06-20T07:46:18.249215229Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.638684ms grafana | logger=migrator t=2025-06-20T07:46:18.256794071Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-20T07:46:18.269896661Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=13.097051ms grafana | logger=migrator t=2025-06-20T07:46:18.274085853Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-20T07:46:18.281428549Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.333415ms grafana | logger=migrator t=2025-06-20T07:46:18.289296359Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-20T07:46:18.299538543Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.236123ms grafana | logger=migrator t=2025-06-20T07:46:18.305621075Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-20T07:46:18.316582348Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=10.959362ms grafana | logger=migrator t=2025-06-20T07:46:18.322513746Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-20T07:46:18.329273416Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=6.76086ms grafana | logger=migrator t=2025-06-20T07:46:18.333412017Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-20T07:46:18.342961422Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.544124ms grafana | logger=migrator t=2025-06-20T07:46:18.354585082Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-20T07:46:18.356219906Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.636204ms grafana | logger=migrator t=2025-06-20T07:46:18.360677355Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-20T07:46:18.400025146Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=39.32889ms grafana | logger=migrator t=2025-06-20T07:46:18.408216645Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-20T07:46:18.421243052Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=13.027167ms grafana | logger=migrator t=2025-06-20T07:46:18.43913623Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-20T07:46:18.450506134Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.367994ms grafana | logger=migrator t=2025-06-20T07:46:18.456865683Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-20T07:46:18.464698433Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" 
duration=7.83131ms grafana | logger=migrator t=2025-06-20T07:46:18.471754001Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-20T07:46:18.481502091Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.74723ms grafana | logger=migrator t=2025-06-20T07:46:18.486866825Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-20T07:46:18.486888495Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=22.83µs grafana | logger=migrator t=2025-06-20T07:46:18.494715454Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-20T07:46:18.494734724Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=20.381µs grafana | logger=migrator t=2025-06-20T07:46:18.505020109Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:18.516948827Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.933158ms grafana | logger=migrator t=2025-06-20T07:46:18.523687488Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.533207602Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.518633ms grafana | logger=migrator t=2025-06-20T07:46:18.53840151Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-20T07:46:18.538681888Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=279.728µs grafana | logger=migrator t=2025-06-20T07:46:18.54401319Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-20T07:46:18.544364989Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=351.519µs grafana | logger=migrator t=2025-06-20T07:46:18.550160534Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:18.562963096Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.798362ms grafana | logger=migrator t=2025-06-20T07:46:18.569566332Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.579788386Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.218524ms grafana | logger=migrator t=2025-06-20T07:46:18.585229821Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-20T07:46:18.593987105Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=8.755754ms grafana | logger=migrator t=2025-06-20T07:46:18.602111942Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-20T07:46:18.610798984Z level=info msg="Migration 
successfully executed" id="add last_sent_at column to alert_instance table" duration=8.686022ms grafana | logger=migrator t=2025-06-20T07:46:18.614769909Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-20T07:46:18.615229501Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=459.422µs grafana | logger=migrator t=2025-06-20T07:46:18.623350528Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:18.630322365Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=6.971177ms grafana | logger=migrator t=2025-06-20T07:46:18.641202796Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.648173661Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=6.970265ms grafana | logger=migrator t=2025-06-20T07:46:18.651826809Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-20T07:46:18.652016545Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=189.896µs grafana | logger=migrator t=2025-06-20T07:46:18.657559983Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-20T07:46:18.657928572Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=368.089µs grafana | logger=migrator t=2025-06-20T07:46:18.664784105Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-20T07:46:18.666882691Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=2.098106ms grafana | logger=migrator t=2025-06-20T07:46:18.67398873Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-20T07:46:18.674017221Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=32.511µs grafana | logger=migrator t=2025-06-20T07:46:18.681509121Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-20T07:46:18.681536782Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=29.191µs grafana | logger=migrator t=2025-06-20T07:46:18.687578573Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-20T07:46:18.68820454Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=622.867µs grafana | logger=migrator t=2025-06-20T07:46:18.692717931Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.705255106Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.537125ms grafana | logger=migrator t=2025-06-20T07:46:18.71138514Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator 
t=2025-06-20T07:46:18.7248874Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=13.502871ms grafana | logger=migrator t=2025-06-20T07:46:18.728772964Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-20T07:46:18.729736039Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=962.725µs grafana | logger=migrator t=2025-06-20T07:46:18.73840426Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-20T07:46:18.74023984Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.83462ms grafana | logger=migrator t=2025-06-20T07:46:18.744606887Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-20T07:46:18.755876188Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=11.269131ms grafana | logger=migrator t=2025-06-20T07:46:18.762671969Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.770327173Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.653814ms grafana | logger=migrator t=2025-06-20T07:46:18.776655852Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-20T07:46:18.776682643Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-20T07:46:18.776928969Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-20T07:46:18.77695274Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=294.798µs grafana | logger=migrator t=2025-06-20T07:46:18.780829744Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-20T07:46:18.78145134Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=620.946µs grafana | logger=migrator t=2025-06-20T07:46:18.785786556Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-20T07:46:18.787863731Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.079916ms grafana | logger=migrator t=2025-06-20T07:46:18.796399949Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-20T07:46:18.797737145Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.337116ms grafana | logger=migrator t=2025-06-20T07:46:18.803350245Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-20T07:46:18.805444711Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.091515ms grafana | logger=migrator 
t=2025-06-20T07:46:18.809872669Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-20T07:46:18.811210694Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.336525ms grafana | logger=migrator t=2025-06-20T07:46:18.818192231Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:18.828253849Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.063538ms grafana | logger=migrator t=2025-06-20T07:46:18.846863187Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:18.860289965Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=13.421818ms grafana | logger=migrator t=2025-06-20T07:46:18.864575059Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-20T07:46:18.871825963Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=7.249504ms grafana | logger=migrator t=2025-06-20T07:46:18.877535156Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-20T07:46:18.887492292Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.956236ms grafana | logger=migrator t=2025-06-20T07:46:18.892997159Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-20T07:46:18.893182344Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-20T07:46:18.893198334Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=201.155µs grafana | logger=migrator t=2025-06-20T07:46:18.89863457Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-20T07:46:18.899962885Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.327664ms grafana | logger=migrator t=2025-06-20T07:46:18.905215015Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.876580207s grafana | logger=migrator t=2025-06-20T07:46:18.905918463Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-20T07:46:18.923828532Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-20T07:46:18.92412752Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-20T07:46:18.93126828Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-20T07:46:19.019288641Z level=info msg="Restored cache from database" duration=482.214µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.029828242Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-20T07:46:19.029858273Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-20T07:46:19.037823555Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator 
t=2025-06-20T07:46:19.03872189Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=898.155µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.055603051Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-20T07:46:19.055633312Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=34.431µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.061480558Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-20T07:46:19.061819677Z level=info msg="Migration successfully executed" id="drop table resource" duration=337.528µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.065930586Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-20T07:46:19.067807106Z level=info msg="Migration successfully executed" id="create table resource" duration=1.87648ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.073539749Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-20T07:46:19.074948897Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.408988ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.082328214Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.082634332Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=305.938µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.086895606Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.088873058Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.976882ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.094810948Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-20T07:46:19.096267886Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.456588ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.10352748Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-20T07:46:19.1057538Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=2.2263ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.111128993Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-20T07:46:19.111400021Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=272.308µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.11698662Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-20T07:46:19.117969146Z level=info msg="Migration successfully executed" id="create table resource_version" duration=982.306µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.121430608Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-20T07:46:19.122772164Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" 
duration=1.346786ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.126267097Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-20T07:46:19.126416891Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=149.224µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.133315415Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-20T07:46:19.13459232Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.276755ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.138410621Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-20T07:46:19.14058232Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.170768ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.146494217Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-20T07:46:19.148157612Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.664145ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.156252888Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.169942353Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.690885ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.174610988Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-20T07:46:19.186239678Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=11.62052ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.191424597Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-20T07:46:19.192336741Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=912.364µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.1979201Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-20T07:46:19.200338325Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=2.417245ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.204036174Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.214933974Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.89709ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.218243844Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-20T07:46:19.227370207Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.124783ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.234542328Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-20T07:46:19.234589869Z level=info msg="finding any 
deletion markers" grafana | logger=resource-migrator t=2025-06-20T07:46:19.23533Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=787.262µs grafana | logger=resource-migrator t=2025-06-20T07:46:19.240707533Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-20T07:46:19.242285355Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.577612ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.263700227Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.27918325Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=15.482063ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.282860429Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-20T07:46:19.284354298Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.493669ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.289145486Z level=info msg="migrations completed" performed=26 skipped=0 duration=251.368962ms grafana | logger=resource-migrator t=2025-06-20T07:46:19.289931547Z level=info msg="Unlocking database" grafana | t=2025-06-20T07:46:19.290322337Z level=info caller=logger.go:214 time=2025-06-20T07:46:19.290274586Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-20T07:46:19.303608332Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-20T07:46:19.340733203Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-20T07:46:19.340760634Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-20T07:46:19.340791765Z level=info msg="Plugins loaded" count=53 duration=37.184113ms grafana | logger=query_data t=2025-06-20T07:46:19.345899542Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-20T07:46:19.350522535Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-20T07:46:19.362925896Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-20T07:46:19.370046737Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-20T07:46:19.370067757Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-20T07:46:19.373908199Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-20T07:46:19.374807564Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:19.379274593Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=ngalert.state.manager t=2025-06-20T07:46:19.380908037Z level=info msg="Warming state cache for startup" grafana | logger=http.server t=2025-06-20T07:46:19.381591675Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | 
logger=ngalert.multiorg.alertmanager t=2025-06-20T07:46:19.382468578Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=ngalert.state.manager t=2025-06-20T07:46:19.467802497Z level=info msg="State cache has been initialized" states=0 duration=86.89568ms grafana | logger=ngalert.scheduler t=2025-06-20T07:46:19.467850878Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-20T07:46:19.4679028Z level=info msg=starting first_tick=2025-06-20T07:46:20Z grafana | logger=grafana.update.checker t=2025-06-20T07:46:19.479326345Z level=info msg="Update check succeeded" duration=99.662122ms grafana | logger=plugins.update.checker t=2025-06-20T07:46:19.482386926Z level=info msg="Update check succeeded" duration=103.74712ms grafana | logger=provisioning.datasources t=2025-06-20T07:46:19.489945988Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-20T07:46:19.502235846Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-20T07:46:19.516663631Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=provisioning.alerting t=2025-06-20T07:46:19.534198809Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-20T07:46:19.53422108Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-20T07:46:19.535250068Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-20T07:46:19.62446147Z level=info msg="Patterns update finished" duration=95.807389ms grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.724415198Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.725065917Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.725600401Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.729153645Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.729853284Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.730336616Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.730892312Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.731836827Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-20T07:46:19.733574934Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-20T07:46:19.781527774Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-20T07:46:20.189381734Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-20T07:46:20.288117341Z level=info msg="finished to provision 
dashboards" grafana | logger=installer.fs t=2025-06-20T07:46:20.323724161Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-20T07:46:20.356411984Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:20.356436795Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=977.129521ms grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:20.356458085Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-20T07:46:20.734608823Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-20T07:46:20.786804677Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-20T07:46:20.802365612Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:20.802400982Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=445.937587ms grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:20.802421273Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-20T07:46:21.134871Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-20T07:46:21.194513392Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-20T07:46:21.210608273Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:21.210636043Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=408.20917ms grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:21.210655974Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-20T07:46:21.477194841Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-20T07:46:21.542082803Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-20T07:46:21.560717971Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-20T07:46:21.560746962Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=350.085838ms grafana | logger=infra.usagestats t=2025-06-20T07:47:35.385682578Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-20 07:46:10,543] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,544] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,547] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,552] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-20 07:46:10,557] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-20 07:46:10,564] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:10,590] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:10,591] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:10,599] INFO Socket connection established, initiating session, client: /172.17.0.5:33322, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:10,631] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000245530000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:10,769] INFO Session: 0x100000245530000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:10,769] INFO EventThread shut down for session: 0x100000245530000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
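The preflight check above opens a short-lived session to zookeeper:2181 (socket connection, session establishment, then an immediate close) before the broker itself is launched. The image performs this with its own Java-based watcher, io.confluent.admin.utils.ZookeeperConnectionWatcher, as logged above; the sketch below is only a simplified stand-in showing the general idea of a reachability probe. The host/port pairs used (zookeeper:2181, and the broker's localhost:29092 host listener that appears further down in the KafkaConfig dump) are taken from this log but are assumptions for the example, not part of the actual scripts.

# Illustrative sketch only: a plain TCP probe that waits until a host:port
# accepts connections. Not the Confluent image's real preflight check.
import socket
import time

def wait_for_port(host, port, timeout_s=60):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True   # something is listening on host:port
        except OSError:
            time.sleep(1)     # not listening yet; retry
    return False

if __name__ == "__main__":
    # Hostnames/ports mirror values seen in this log and are assumptions here.
    print("zookeeper:2181 reachable:", wait_for_port("zookeeper", 2181))
    print("localhost:29092 reachable:", wait_for_port("localhost", 29092))
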
kafka | [2025-06-20 07:46:11,497] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-20 07:46:11,796] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-20 07:46:11,866] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-20 07:46:11,867] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-20 07:46:11,867] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-20 07:46:11,880] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-20 07:46:11,883] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,883] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,884] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,884] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,885] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-20 07:46:11,888] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-20 07:46:11,894] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:11,895] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-20 07:46:11,898] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:11,903] INFO Socket connection established, initiating session, client: /172.17.0.5:53294, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:11,910] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000245530001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-20 07:46:11,914] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-20 07:46:12,232] INFO Cluster ID = 6vY-6QxeRAqELjIL4Qvq3A (kafka.server.KafkaServer) kafka | [2025-06-20 07:46:12,235] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-20 07:46:12,280] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-20 07:46:12,313] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-20 07:46:12,314] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-20 07:46:12,314] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-20 07:46:12,318] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-20 07:46:12,349] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-20 07:46:12,351] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-20 07:46:12,364] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager) kafka | [2025-06-20 07:46:12,364] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-20 07:46:12,366] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
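The block above is the effective kafka.server.KafkaConfig the job's single broker starts with: ZooKeeper mode against zookeeper:2181, PLAINTEXT listeners, and default retention/transaction settings. As a minimal sketch (not part of the CSIT scripts), the same effective configuration can be read back over the admin API once the broker is up; the bootstrap address kafka:9092 and broker id 1 are taken from the log, everything else is illustrative.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

// Illustrative sketch: read broker 1's effective configuration back over the admin API.
public final class DumpBrokerConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // kafka:9092 is the PLAINTEXT listener inside the compose network (see log above);
        // from the host, localhost:29092 would be used instead.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
            // Prints the same key/value pairs that KafkaConfig logs at startup.
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}
```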
(kafka.log.LogManager) kafka | [2025-06-20 07:46:12,376] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-20 07:46:12,426] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-20 07:46:12,450] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-20 07:46:12,467] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-20 07:46:12,512] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-20 07:46:12,848] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-20 07:46:12,851] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-20 07:46:12,873] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-20 07:46:12,873] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-20 07:46:12,873] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-20 07:46:12,877] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-20 07:46:12,882] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-20 07:46:12,905] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:12,909] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:12,909] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:12,910] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:12,928] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-20 07:46:12,950] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-20 07:46:12,980] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750405572965,1750405572965,1,0,0,72057603790929921,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-20 07:46:12,981] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-20 07:46:13,037] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-20 07:46:13,043] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:13,049] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:13,050] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:13,064] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-20 07:46:13,071] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:13,078] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,078] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:13,086] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,091] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-20 07:46:13,095] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-20 07:46:13,098] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-20 07:46:13,099] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-20 07:46:13,136] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
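At this point the broker has registered znode /brokers/ids/1 advertising PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092, broker 1 has been elected controller for epoch 1, and the group and transaction coordinators are up. A minimal client-side check of that state, assuming the localhost:29092 listener is reachable from the machine running the check (the class and its name are illustrative, not part of the CSIT):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

// Illustrative sketch: verify that broker 1 registered itself and acts as controller,
// mirroring the /brokers/ids/1 registration and controller-election entries above.
public final class CheckCluster {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092"); // PLAINTEXT_HOST listener
        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("cluster id : " + cluster.clusterId().get());
            System.out.println("controller : " + cluster.controller().get()); // expected: broker id 1
            cluster.nodes().get().forEach(node -> System.out.println("broker     : " + node));
        }
    }
}
```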
(kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-20 07:46:13,137] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-20 07:46:13,137] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,144] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,157] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,161] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,188] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-20 07:46:13,188] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,193] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,199] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-20 07:46:13,205] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2025-06-20 07:46:13,211] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-20 07:46:13,213] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,213] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,213] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,214] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,218] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,218] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,219] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,219] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-20 07:46:13,220] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,228] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-20 07:46:13,230] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-20 07:46:13,230] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-20 07:46:13,230] INFO Kafka startTimeMs: 1750405573221 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-20 07:46:13,232] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-20 07:46:13,235] INFO [ReplicaStateMachine controllerId=1] Initializing replica state 
(kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-20 07:46:13,236] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-20 07:46:13,245] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-20 07:46:13,245] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-20 07:46:13,245] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-20 07:46:13,246] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-20 07:46:13,249] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-20 07:46:13,249] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,255] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-20 07:46:13,265] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,266] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,266] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,267] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,269] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,283] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:13,328] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-20 07:46:13,334] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-20 07:46:13,385] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-20 07:46:18,286] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:18,287] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:44,765] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-20 07:46:44,767] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 
0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:44,770] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-20 07:46:44,774] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:44,818] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(KFspdrMiSbqeJN5Un5f9TA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(ouJXnXx-Sr2LUwtAuooAxQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:44,821] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,825] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
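The controller activity above is the reaction to the two topic creations logged at 07:46:44: policy-pdp-pap with a single partition and the broker-internal __consumer_offsets with 50 compacted partitions, all assigned to the only broker. A hedged sketch of the equivalent client-side creation of the application topic follows; the topic name, partition count, and replication factor come from the log, the rest is illustrative and not part of the CSIT scripts.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Illustrative sketch: create the application topic seen in the log above.
// __consumer_offsets is an internal topic the broker creates on its own
// (50 partitions, cleanup.policy=compact), so it is deliberately not created here.
public final class CreatePolicyPdpPapTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (AdminClient admin = AdminClient.create(props)) {
            // One partition, replication factor 1 -- matching the single-broker assignment
            // HashMap(0 -> ArrayBuffer(1)) reported by AdminZkClient.
            NewTopic policyPdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(List.of(policyPdpPap)).all().get();
        }
    }
}
```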
[2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,826] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,827] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-20 07:46:44,831] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-20 07:46:44,836] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,836] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,836] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-20 07:46:44,842] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 
07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,037] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
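Each of these transitions installs LeaderAndIsr(leader=1, isr=List(1)) for one partition, i.e. the single broker becomes leader with itself as the only in-sync replica. A minimal sketch for reading that leadership back after the fact, assuming kafka-clients 3.x for the allTopicNames() accessor; apart from the topic names and the expected broker id, the code is illustrative:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Illustrative sketch: confirm that every partition of the newly created topics
// reports the single broker as leader with itself as the only in-sync replica.
public final class DescribePolicyTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeTopics(List.of("policy-pdp-pap", "__consumer_offsets"))
                 .allTopicNames().get()   // kafka-clients 3.x; older clients use all()
                 .forEach((name, description) -> description.partitions().forEach(p ->
                         System.out.printf("%s-%d leader=%s isr=%s%n",
                                 name, p.partition(), p.leader(), p.isr())));
        }
    }
}
```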
kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,038] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,039] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,039] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,039] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-20 07:46:45,042] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-20 07:46:45,042] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-20 07:46:45,043] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-20 07:46:45,044] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-20 07:46:45,046] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-20 07:46:45,050] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,051] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-20 07:46:45,053] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,059] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,061] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,062] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,063] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,064] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,064] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,064] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-20 07:46:45,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-20 07:46:45,112] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-20 07:46:45,114] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-20 07:46:45,114] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,179] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,193] INFO Created log for partition __consumer_offsets-3 in 
/var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,195] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,195] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,197] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,212] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,213] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,213] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,213] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,214] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,226] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,226] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,227] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,227] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,227] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,241] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,242] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,242] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,242] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,243] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,253] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,254] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,254] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,254] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,255] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,275] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,276] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,277] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,277] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,277] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,287] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,288] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,288] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,288] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,289] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,299] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,300] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,300] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,300] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,300] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,313] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,315] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,315] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,315] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,315] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,324] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,325] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,325] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,325] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,325] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,334] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,335] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,335] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,335] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,335] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,346] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,347] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,347] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,348] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,348] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,357] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,358] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,358] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,358] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,358] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,368] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,369] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,369] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,369] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,370] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,381] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,382] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,383] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,383] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,383] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,393] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,394] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,394] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,394] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,394] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,403] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,404] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,404] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,404] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,404] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,414] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,414] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,414] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,414] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,415] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,426] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,427] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,427] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,428] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,428] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,436] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,437] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,437] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,437] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,437] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,444] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,445] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,445] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,445] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,445] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,456] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,457] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,457] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,457] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,457] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,472] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,473] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,473] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,473] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,473] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,484] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,485] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,485] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,485] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,485] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,495] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,495] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,495] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,495] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,495] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,509] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,510] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,510] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,510] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,510] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,518] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,519] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,519] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,519] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,519] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,528] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,529] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,529] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,529] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,529] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,539] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,540] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,540] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,540] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,540] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,550] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,550] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,550] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,550] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,550] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,557] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,558] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,558] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,558] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,558] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,567] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,567] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,568] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,568] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,568] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,577] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,577] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,577] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,577] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,577] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,586] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,587] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,587] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,587] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,587] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,601] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,602] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,602] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,602] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,602] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(KFspdrMiSbqeJN5Un5f9TA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,615] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,617] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,617] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,617] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,617] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,626] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,627] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,627] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,627] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,627] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,636] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,636] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,636] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,636] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,637] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,645] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,646] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,646] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,646] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,646] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,661] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,661] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,661] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,661] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,662] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,671] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,672] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,672] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,672] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,672] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,679] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,680] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,680] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,680] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,680] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,688] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,688] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,688] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,689] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,689] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,697] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,698] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,698] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,698] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,698] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,707] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,708] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,708] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,708] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,708] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,715] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,716] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,716] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,716] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,716] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,730] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,731] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,731] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,731] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,731] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,741] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,741] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,742] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,742] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,742] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-20 07:46:45,751] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,752] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,752] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,752] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,752] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,763] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,766] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,766] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,766] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,767] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-20 07:46:45,776] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-20 07:46:45,777] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-20 07:46:45,777] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,777] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-20 07:46:45,777] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(ouJXnXx-Sr2LUwtAuooAxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
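The LogLoader/LogManager entries above show broker 1 creating the internal __consumer_offsets partitions with cleanup.policy=compact and segment.bytes=104857600 and becoming leader for each with an initial high watermark of 0. A minimal sketch for spot-checking that layout, assuming the kafka:9092 listener recorded later in this log is reachable from inside the kafka container (a read-only metadata call, not part of the CSIT flow itself):

    kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets

Each of the 50 listed partitions would be expected to report Leader: 1 and Isr: 1, matching the state-change entries here.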
(state.change.logger) kafka | [2025-06-20 07:46:45,785] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-20 07:46:45,786] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-20 07:46:45,787] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-20 07:46:45,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-20 07:46:45,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-20 07:46:45,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-20 07:46:45,791] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-20 07:46:45,801] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,803] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,806] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,806] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,817] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,817] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,818] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,818] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 3 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,821] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:45,821] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,824] INFO [Broker id=1] Finished LeaderAndIsr request in 767ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-20 07:46:45,825] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,826] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,827] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
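At this point broker 1 has been elected group coordinator for every __consumer_offsets partition and has finished loading their (still empty) offset and group metadata. A minimal sketch for confirming coordinator state once the policy components attach, again assuming the kafka:9092 listener; no group names appear in this log, so --all-groups is used instead of a specific group id:

    kafka-consumer-groups --bootstrap-server kafka:9092 --list
    kafka-consumer-groups --bootstrap-server kafka:9092 --describe --all-groups

The describe output (assigned partitions, current offset, log end offset, lag) is served from the coordinator partitions whose loading is logged above.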
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-20 07:46:45,833] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ouJXnXx-Sr2LUwtAuooAxQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=KFspdrMiSbqeJN5Un5f9TA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | 
[2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,841] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-20 07:46:45,842] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) kafka | [2025-06-20 07:46:45,844] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-20 07:46:46,579] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:46,579] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 14a2c8c5-4585-4382-b57e-7c1f1bc94225 in Empty state. Created a new member id consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:46,601] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:46,604] INFO [GroupCoordinator 1]: Preparing to rebalance group 14a2c8c5-4585-4382-b57e-7c1f1bc94225 in state PreparingRebalance with old generation 0 (__consumer_offsets-30) (reason: Adding new member consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4 with group instance id None; client reason: need to re-join with the given member-id: consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:47,732] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5a4dad8c-e056-4ff3-8f02-267c7433f80f in Empty state. Created a new member id consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:47,737] INFO [GroupCoordinator 1]: Preparing to rebalance group 5a4dad8c-e056-4ff3-8f02-267c7433f80f in state PreparingRebalance with old generation 0 (__consumer_offsets-46) (reason: Adding new member consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a with group instance id None; client reason: need to re-join with the given member-id: consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:49,618] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:49,622] INFO [GroupCoordinator 1]: Stabilized group 14a2c8c5-4585-4382-b57e-7c1f1bc94225 generation 1 (__consumer_offsets-30) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-20 07:46:49,645] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-20 07:46:49,646] INFO [GroupCoordinator 1]: Assignment received from leader consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4 for group 14a2c8c5-4585-4382-b57e-7c1f1bc94225 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-20 07:46:50,739] INFO [GroupCoordinator 1]: Stabilized group 5a4dad8c-e056-4ff3-8f02-267c7433f80f generation 1 (__consumer_offsets-46) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-20 07:46:50,758] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a for group 5a4dad8c-e056-4ff3-8f02-267c7433f80f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api | [Spring Boot ASCII-art startup banner]
policy-api | :: Spring Boot :: (v3.4.6)
policy-api | [2025-06-20T07:46:22.976+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-20T07:46:23.055+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 34 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-20T07:46:23.057+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-20T07:46:24.476+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-20T07:46:24.641+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 154 ms. Found 6 JPA repository interfaces.
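The GroupCoordinator entries above show the standard Kafka group-membership handshake: an unknown dynamic member joins, is handed a member id, the group moves through PreparingRebalance to Stabilized, and the leader's assignment is accepted. A minimal consumer that would trigger the same join/rebalance flow is sketched below; it uses the kafka-python client as an assumption purely for illustration (the policy components in this log are Java clients), with the broker, topic, and group names taken from the log.

# Illustrative sketch only: reproduces the JoinGroup -> rebalance -> SyncGroup
# sequence logged by GroupCoordinator above. kafka-python is assumed to be installed.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",                # topic created earlier in this log
    bootstrap_servers="kafka:9092",  # broker named in the LEADER_AND_ISR entries
    group_id="policy-pap",           # joining this group causes a rebalance like the one above
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,       # stop iterating if no records arrive for 10 s
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
consumer.close()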
policy-api | [2025-06-20T07:46:25.297+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-20T07:46:25.317+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-20T07:46:25.320+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-20T07:46:25.320+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-20T07:46:25.368+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-20T07:46:25.369+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2255 ms
policy-api | [2025-06-20T07:46:25.695+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-20T07:46:25.792+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-20T07:46:25.843+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-20T07:46:26.260+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-20T07:46:26.300+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-20T07:46:26.517+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1ab21633
policy-api | [2025-06-20T07:46:26.520+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-20T07:46:26.600+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | Database driver: undefined/unknown
policy-api | Database version: 16.4
policy-api | Autocommit mode: undefined/unknown
policy-api | Isolation level: undefined/unknown
policy-api | Minimum pool size: undefined/unknown
policy-api | Maximum pool size: undefined/unknown
policy-api | [2025-06-20T07:46:28.717+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-20T07:46:28.720+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-20T07:46:29.345+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-20T07:46:30.224+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-20T07:46:31.405+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-20T07:46:31.452+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-20T07:46:32.114+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-20T07:46:32.244+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-20T07:46:32.259+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-20T07:46:32.289+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.97 seconds (process running for 10.638)
policy-api | [2025-06-20T07:46:39.927+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-20T07:46:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-20T07:46:39.929+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
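The migrator blocks until PostgreSQL is reachable; the nc retry lines that follow show the pattern (connection refused until the server accepts, then "succeeded"). Below is a minimal Python equivalent of that wait-for-port loop, given only as a sketch: the actual container apparently drives the check with nc from a shell script, and the retry interval chosen here is arbitrary.

# Minimal wait-for-port helper mirroring the nc retry lines below.
# Host and port are taken from the log; the 2-second retry interval is an assumption.
import socket
import time

def wait_for_port(host: str, port: int, interval: float = 2.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"Connection to {host} ({port}) succeeded!")
                return
        except OSError:
            print(f"connect to {host} port {port} (tcp) failed: retrying in {interval}s")
            time.sleep(interval)

wait_for_port("postgres", 5432)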
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded! policy-db-migrator | Initializing policyadmin... policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 
0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | 
rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) 
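After "policyadmin: OK: upgrade (1300)", the migrator dumps its bookkeeping tables (schema_versions and policyadmin_schema_changelog, shown below) to confirm the target version and per-script results. The same verification could be done directly against the database; the sketch below uses psycopg2, and the database name, user, and password are assumptions to adjust for the environment.

# Sketch of querying the migrator bookkeeping tables shown in the log below.
# Connection details are placeholders/assumptions, not values confirmed by this log.
import psycopg2

conn = psycopg2.connect(
    host="postgres",
    port=5432,
    dbname="migration",     # assumption: one of the databases listed in the \l output
    user="policy_user",
    password="CHANGEME",    # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT name, version FROM schema_versions")
    print(cur.fetchall())   # the log below reports policyadmin at version 1300
    cur.execute(
        "SELECT id, script, operation, from_version, to_version, success "
        "FROM policyadmin_schema_changelog ORDER BY id"
    )
    for row in cur.fetchall():
        print(row)
conn.close()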
policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.254897 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.302725 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.367454 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.420214 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.465053 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.512421 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.570402 policy-db-migrator | 8 | 
0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.617421 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.666756 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.719596 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.774618 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.82679 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.887303 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.932787 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:10.983666 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.044591 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.099633 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.149554 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.204462 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.256957 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.302481 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.357291 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.409151 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.458199 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.504617 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.551726 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.608065 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.658329 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.713114 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.766336 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.825558 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.889542 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 2006250746100800u 
| 1 | 2025-06-20 07:46:11.946438 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:11.995688 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.062619 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.119745 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.177251 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.244604 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.303418 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.366919 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.424341 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.486327 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.532821 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.580425 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.634813 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.685085 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.739144 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.794495 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.851259 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.902176 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:12.956078 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.013013 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.06923 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.132433 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.182883 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.226891 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.281102 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.340951 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.39573 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.460006 
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.520181 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.57535 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.637811 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.695225 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.760463 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.818136 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.87384 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.926994 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:13.981726 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.035853 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.101665 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.162923 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.224027 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.276185 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.329309 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.38095 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.438357 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.495675 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.548857 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.609185 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.682717 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.738245 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.790093 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.844277 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.896131 policy-db-migrator | 86 | 
0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:14.948902 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.00334 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.063689 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.128672 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.178803 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.228956 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.279412 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.345844 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.397062 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.445545 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2006250746100800u | 1 | 2025-06-20 07:46:15.49316 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.558025 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.610622 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.663584 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.71012 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.772093 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.830862 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.882654 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.932257 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:15.980937 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:16.04043 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:16.089043 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:16.158384 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 2006250746100900u | 1 | 2025-06-20 07:46:16.211131 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.268761 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 
| 2025-06-20 07:46:16.331082 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.385594 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.439617 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.486849 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.540275 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.602335 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.65847 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 2006250746101000u | 1 | 2025-06-20 07:46:16.71715 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 2006250746101100u | 1 | 2025-06-20 07:46:16.767892 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 2006250746101200u | 1 | 2025-06-20 07:46:16.818952 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 2006250746101200u | 1 | 2025-06-20 07:46:16.87419 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 2006250746101200u | 1 | 2025-06-20 07:46:16.9306 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 2006250746101200u | 1 | 2025-06-20 07:46:16.985012 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 2006250746101300u | 1 | 2025-06-20 07:46:17.037247 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 2006250746101300u | 1 | 2025-06-20 07:46:17.089536 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 2006250746101300u | 1 | 2025-06-20 07:46:17.138783 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... 
policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | 
| | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator 
| ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | 
| | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:17.832152 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:17.893092 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:17.961198 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.023749 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.078511 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.140183 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.193334 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.250187 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.309617 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.364374 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.415849 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.486859 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 2006250746171400u | 1 | 2025-06-20 07:46:18.539757 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.588859 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.636487 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.68668 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.729602 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.783208 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.837813 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 2006250746171500u | 1 | 2025-06-20 07:46:18.893385 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 2006250746171500u 
| 1 | 2025-06-20 07:46:18.942623 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 2006250746171600u | 1 | 2025-06-20 07:46:18.992984 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 2006250746171600u | 1 | 2025-06-20 07:46:19.044444 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 2006250746171601u | 1 | 2025-06-20 07:46:19.102602 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 2006250746171601u | 1 | 2025-06-20 07:46:19.152292 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 2006250746171700u | 1 | 2025-06-20 07:46:19.212033 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 2006250746171700u | 1 | 2025-06-20 07:46:19.26721 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 2006250746171700u | 1 | 2025-06-20 07:46:19.319604 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.379094 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.438474 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.485653 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.535569 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.58561 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.634698 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.700668 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.753604 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 2006250746171701u | 1 | 2025-06-20 07:46:19.809283 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... 
policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 
| | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" 
already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 2006250746201600u | 1 | 2025-06-20 07:46:20.466096 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 2006250746211600u | 1 | 2025-06-20 07:46:21.126928 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 2006250746211600u | 1 | 2025-06-20 07:46:21.199931 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-drools-pdp | Waiting for pap port 6969... 
policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.8) port 6969 (tcp) failed: Connection refused policy-drools-pdp | Connection to pap (172.17.0.8) 6969 port [tcp/*] succeeded! policy-drools-pdp | Waiting for kafka port 9092... policy-drools-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! 
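The two "Waiting for ... port" phases above are plain retry loops: keep probing with nc until the dependency (pap on 6969, then kafka on 9092) accepts a TCP connection. A minimal sketch under that assumption, using a hypothetical wait_for_port helper and common netcat flags rather than the real pdpd-entrypoint.sh contents:

    # Sketch only: block until a host:port pair accepts TCP connections.
    wait_for_port() {
        local host=$1 port=$2
        echo "Waiting for $host port $port..."
        until nc -vz -w 5 "$host" "$port"; do   # nc -v reports each failed/successful attempt
            sleep 5
        done
    }
    wait_for_port pap 6969
    wait_for_port kafka 9092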
policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | -- configure -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | + set -x policy-drools-pdp | + systemConfs policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | -- reload -- policy-drools-pdp | -- systemConfs -- policy-drools-pdp | -- maven -- policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | -- features -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | + set -x policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | -- security -- policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + serverConfig properties policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | -- scripts -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' policy-drools-pdp | + ls 
/tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + policy exec policy-drools-pdp | -- /opt/app/policy/bin/policy exec -- policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller policy-drools-pdp | + OPERATION=none policy-drools-pdp | + '[' -z exec ] policy-drools-pdp | + OPERATION=exec policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + policy_exec policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- policy_exec --' policy-drools-pdp | + set -x policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + check_x_file bin/policy-management-controller policy-drools-pdp | -- policy_exec -- policy-drools-pdp | -- check_x_file -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- check_x_file --' policy-drools-pdp | + set -x policy-drools-pdp | + FILE=bin/policy-management-controller policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' 
-x bin/policy-management-controller ]] policy-drools-pdp | + return 0 policy-drools-pdp | + bin/policy-management-controller exec policy-drools-pdp | + _DIR=/opt/app/policy policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd policy-drools-pdp | -- bin/policy-management-controller exec -- policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ] policy-drools-pdp | + CONTROLLER=policy-management-controller policy-drools-pdp | + RETVAL=0 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID policy-drools-pdp | + exec_start policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- exec_start -- policy-drools-pdp | + echo '-- exec_start --' policy-drools-pdp | + set -x policy-drools-pdp | + status policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- status --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /opt/app/policy/PID ] policy-drools-pdp | -- status -- policy-drools-pdp | + '[' true ] policy-drools-pdp | + pidof -s java policy-drools-pdp | + _PID= policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running' policy-drools-pdp | + _RUNNING=0 policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + RETVAL=1 policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running' policy-drools-pdp | Policy Management (no pidfile) is NOT running policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + preRunning policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- preRunning --' policy-drools-pdp | + set -x policy-drools-pdp | -- preRunning -- policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd policy-drools-pdp | + xargs -I X printf ':%s' X policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar /opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.13.0.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.15.11.jar /opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.48.3.jar /opt/app/policy/lib/classgraph-4.8.179.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.11.0.jar /opt/app/policy/lib/commons-cli-1.9.0.jar /opt/app/policy/lib/commons-codec-1.18.0.jar /opt/app/policy/lib/commons-collections-3.2.2.jar /opt/app/policy/lib/commons-collections4-4.5.0-M3.jar /opt/app/policy/lib/commons-configuration2-2.11.0.jar /opt/app/policy/lib/commons-digester-2.1.jar /opt/app/policy/lib/commons-io-2.18.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.17.0.jar /opt/app/policy/lib/commons-logging-1.3.5.jar /opt/app/policy/lib/commons-net-3.11.1.jar /opt/app/policy/lib/commons-text-1.13.0.jar /opt/app/policy/lib/commons-validator-1.8.0.jar /opt/app/policy/lib/core-0.12.4.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar 
/opt/app/policy/lib/drools-engine-8.40.1.Final.jar /opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.36.0.jar /opt/app/policy/lib/failureaccess-1.0.3.jar /opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-2.12.1.jar /opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.4.6-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/handy-uri-templates-2.1.8.jar /opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar /opt/app/policy/lib/hibernate-core-6.6.16.Final.jar /opt/app/policy/lib/hk2-api-3.0.6.jar /opt/app/policy/lib/hk2-locator-3.0.6.jar /opt/app/policy/lib/hk2-utils-3.0.6.jar /opt/app/policy/lib/httpclient-4.5.13.jar /opt/app/policy/lib/httpcore-4.4.15.jar /opt/app/policy/lib/icu4j-74.2.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-3.0.0.jar /opt/app/policy/lib/jackson-annotations-2.18.3.jar /opt/app/policy/lib/jackson-core-2.18.3.jar /opt/app/policy/lib/jackson-databind-2.18.3.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar /opt/app/policy/lib/jakarta.activation-api-2.1.3.jar /opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-2.6.1.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.1.1.jar /opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-3.2.0.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.30.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar /opt/app/policy/lib/jcodings-1.0.58.jar /opt/app/policy/lib/jersey-client-3.1.10.jar 
/opt/app/policy/lib/jersey-common-3.1.10.jar /opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar /opt/app/policy/lib/jersey-hk2-3.1.10.jar /opt/app/policy/lib/jersey-server-3.1.10.jar /opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar /opt/app/policy/lib/jetty-http-12.0.21.jar /opt/app/policy/lib/jetty-io-12.0.21.jar /opt/app/policy/lib/jetty-security-12.0.21.jar /opt/app/policy/lib/jetty-server-12.0.21.jar /opt/app/policy/lib/jetty-session-12.0.21.jar /opt/app/policy/lib/jetty-util-12.0.21.jar /opt/app/policy/lib/joda-time-2.10.2.jar /opt/app/policy/lib/joni-2.2.1.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsoup-1.17.2.jar /opt/app/policy/lib/jspecify-1.0.0.jar /opt/app/policy/lib/kafka-clients-3.9.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.5.18.jar /opt/app/policy/lib/logback-core-1.5.18.jar /opt/app/policy/lib/lombok-1.18.38.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.43.0.jar /opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar /opt/app/policy/lib/opentelemetry-context-1.43.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.6.0.jar 
/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.5.jar /opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.8.jar /opt/app/policy/lib/slf4j-api-2.0.17.jar /opt/app/policy/lib/snakeyaml-2.4.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.29.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.11.0.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy
/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/o
pt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-a
nnotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + /opt/app/policy/bin/configure-maven policy-drools-pdp | + export 'M2_HOME=/home/policy/.m2' policy-drools-pdp | + mkdir -p /home/policy/.m2 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp 
/opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.11.0.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy
/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/pol
icy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-
2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main policy-drools-pdp | [2025-06-20T07:46:45.445+00:00|INFO|LifecycleFsm|main] The mandatory Policy Types are []. Compliance is true policy-drools-pdp | [2025-06-20T07:46:45.448+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [org.onap.policy.drools.lifecycle.LifecycleFeature@2235eaab] policy-drools-pdp | [2025-06-20T07:46:45.456+00:00|INFO|PolicyContainer|main] PolicyContainer.main: configDir=config policy-drools-pdp | [2025-06-20T07:46:45.457+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-20T07:46:45.465+00:00|INFO|IndexedKafkaTopicSourceFactory|main] IndexedKafkaTopicSourceFactory []: no topic for KAFKA Source policy-drools-pdp | [2025-06-20T07:46:45.467+00:00|INFO|IndexedKafkaTopicSinkFactory|main] IndexedKafkaTopicSinkFactory []: no topic for KAFKA Sink policy-drools-pdp | [2025-06-20T07:46:45.874+00:00|INFO|PolicyEngineManager|main] lock manager is org.onap.policy.drools.system.internal.SimpleLockManager@376a312c policy-drools-pdp | [2025-06-20T07:46:45.884+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START policy-drools-pdp | [2025-06-20T07:46:45.897+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-drools-pdp | [2025-06-20T07:46:45.901+00:00|INFO|JettyServletServer|CONFIG-9696] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-drools-pdp | [2025-06-20T07:46:45.907+00:00|INFO|Server|CONFIG-9696] jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 policy-drools-pdp | [2025-06-20T07:46:45.946+00:00|INFO|DefaultSessionIdManager|CONFIG-9696] Session workerName=node0 policy-drools-pdp | [2025-06-20T07:46:45.955+00:00|INFO|ContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. 
policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 20, 2025 7:46:46 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored. 
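(For reference, the very long exec line traced earlier reduces to the following pattern: the launcher joins every jar under /opt/app/policy/lib into a ':'-separated classpath and turns each key=value line of config/system.properties into a -D flag. This is a simplified sketch based on the trace above, not the verbatim script.)

    # Simplified sketch of how the launcher assembles its classpath and -D flags.
    POLICY_HOME=/opt/app/policy

    # Join every jar in lib/ into a ':'-prefixed, ':'-separated classpath suffix.
    CP=$(ls ${POLICY_HOME}/lib/*.jar | xargs -I X printf ':%s' X)

    # Turn each "key = value" line of system.properties into a -Dkey=value flag
    # (yields -Dlogback.configurationFile=config/logback.xml in this run).
    systemProperties=$(sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' \
        ${POLICY_HOME}/config/system.properties)

    cd ${POLICY_HOME}
    exec java -server -Xms512m -Xmx512m \
        -cp "${POLICY_HOME}/config:${POLICY_HOME}/lib:${CP}" \
        ${systemProperties} org.onap.policy.drools.system.Main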
policy-drools-pdp | [2025-06-20T07:46:46.799+00:00|INFO|GsonMessageBodyHandler|CONFIG-9696] Using GSON for REST calls policy-drools-pdp | [2025-06-20T07:46:46.800+00:00|INFO|JacksonHandler|CONFIG-9696] Using GSON with Jackson behaviors for REST calls policy-drools-pdp | [2025-06-20T07:46:46.802+00:00|INFO|YamlMessageBodyHandler|CONFIG-9696] Accepting YAML for REST calls policy-drools-pdp | [2025-06-20T07:46:46.989+00:00|INFO|ServletContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | [2025-06-20T07:46:47.001+00:00|INFO|AbstractConnector|CONFIG-9696] Started CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696} policy-drools-pdp | [2025-06-20T07:46:47.004+00:00|INFO|Server|CONFIG-9696] Started oejs.Server@3276732{STARTING}[12.0.21,sto=0] @2608ms policy-drools-pdp | [2025-06-20T07:46:47.005+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 8894 ms. 
policy-drools-pdp | [2025-06-20T07:46:47.014+00:00|INFO|LifecycleFsm|main] lifecycle event: start engine policy-drools-pdp | [2025-06-20T07:46:47.165+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-1 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = 5a4dad8c-e056-4ff3-8f02-267c7433f80f policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | 
sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | [2025-06-20T07:46:47.203+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-20T07:46:47.277+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-20T07:46:47.278+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-20T07:46:47.278+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405607276 policy-drools-pdp | [2025-06-20T07:46:47.280+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-1, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | [2025-06-20T07:46:47.280+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5a4dad8c-e056-4ff3-8f02-267c7433f80f, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e6308a9 policy-drools-pdp | [2025-06-20T07:46:47.294+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5a4dad8c-e056-4ff3-8f02-267c7433f80f, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-20T07:46:47.295+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = 5a4dad8c-e056-4ff3-8f02-267c7433f80f policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 
policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | [2025-06-20T07:46:47.295+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-20T07:46:47.305+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-20T07:46:47.305+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-20T07:46:47.305+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405607305 policy-drools-pdp | [2025-06-20T07:46:47.306+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | 
[2025-06-20T07:46:47.307+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5a4dad8c-e056-4ff3-8f02-267c7433f80f, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-20T07:46:47.311+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12fc0de8-2d36-4aa6-b5b9-138e359a59cf, alive=false, publisher=null]]: starting policy-drools-pdp | [2025-06-20T07:46:47.323+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-drools-pdp | acks = -1 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | batch.size = 16384 policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | buffer.memory = 33554432 policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = producer-1 policy-drools-pdp | compression.gzip.level = -1 policy-drools-pdp | compression.lz4.level = 9 policy-drools-pdp | compression.type = none policy-drools-pdp | compression.zstd.level = 3 policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | delivery.timeout.ms = 120000 policy-drools-pdp | enable.idempotence = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | linger.ms = 0 policy-drools-pdp | max.block.ms = 60000 policy-drools-pdp | max.in.flight.requests.per.connection = 5 policy-drools-pdp | max.request.size = 1048576 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.max.idle.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partitioner.adaptive.partitioning.enable = true policy-drools-pdp | partitioner.availability.timeout.ms = 0 policy-drools-pdp | partitioner.class = null policy-drools-pdp | partitioner.ignore.keys = false policy-drools-pdp | receive.buffer.bytes = 32768 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retries = 2147483647 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | 
sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | transaction.timeout.ms = 60000 policy-drools-pdp | transactional.id = null policy-drools-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | policy-drools-pdp | [2025-06-20T07:46:47.324+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-20T07:46:47.336+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
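
Editor's note: the ConsumerConfig and ProducerConfig dumps above show the effective Kafka client settings (bootstrap server kafka:9092, String key/value deserializers, auto-commit with latest offset reset, plaintext security). The following is only an illustrative sketch, assuming the plain Kafka Java client API, of how a consumer with those settings could be built and subscribed to the policy-pdp-pap topic; the group id and topic name are taken from the log, everything else is an assumption and not the project's actual wiring.

    // Minimal sketch of a consumer resembling the ones logged above (assumptions noted inline).
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Group id copied from the log; a real deployment generates its own.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "5a4dad8c-e056-4ff3-8f02-267c7433f80f");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // matches auto.offset.reset above
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");  // matches enable.auto.commit above

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // One poll for illustration; the PDP's source thread polls in a loop and
                // dispatches each record to its registered listeners.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
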
policy-drools-pdp | [2025-06-20T07:46:47.355+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-20T07:46:47.355+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-20T07:46:47.355+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405607354 policy-drools-pdp | [2025-06-20T07:46:47.356+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12fc0de8-2d36-4aa6-b5b9-138e359a59cf, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-drools-pdp | [2025-06-20T07:46:47.360+00:00|INFO|LifecycleStateDefault|main] LifecycleStateTerminated(): state-change from TERMINATED to PASSIVE policy-drools-pdp | [2025-06-20T07:46:47.360+00:00|INFO|LifecycleFsm|pool-2-thread-1] lifecycle event: status policy-drools-pdp | [2025-06-20T07:46:47.361+00:00|INFO|MdcTransactionImpl|main] policy-drools-pdp | [2025-06-20T07:46:47.365+00:00|INFO|Main|main] Started policy-drools-pdp service successfully. policy-drools-pdp | [2025-06-20T07:46:47.380+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-20T07:46:47.708+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-drools-pdp | [2025-06-20T07:46:47.708+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-drools-pdp | [2025-06-20T07:46:47.709+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-drools-pdp | [2025-06-20T07:46:47.711+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-drools-pdp | [2025-06-20T07:46:47.716+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] (Re-)joining group policy-drools-pdp | [2025-06-20T07:46:47.734+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Request joining group due to: need to re-join with the given member-id: consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a policy-drools-pdp | [2025-06-20T07:46:47.734+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] (Re-)joining group policy-drools-pdp | [2025-06-20T07:46:50.741+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a', protocol='range'} policy-drools-pdp | [2025-06-20T07:46:50.753+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, 
groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Finished assignment for group at generation 1: {consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a=Assignment(partitions=[policy-pdp-pap-0])} policy-drools-pdp | [2025-06-20T07:46:50.762+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2-63df8d44-2a77-4c6b-9e73-3638d1da038a', protocol='range'} policy-drools-pdp | [2025-06-20T07:46:50.763+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-drools-pdp | [2025-06-20T07:46:50.766+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Adding newly assigned partitions: policy-pdp-pap-0 policy-drools-pdp | [2025-06-20T07:46:50.774+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Found no committed offset for partition policy-pdp-pap-0 policy-drools-pdp | [2025-06-20T07:46:50.785+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5a4dad8c-e056-4ff3-8f02-267c7433f80f-2, groupId=5a4dad8c-e056-4ff3-8f02-267c7433f80f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-20T07:46:34.366+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 55 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-20T07:46:34.368+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-20T07:46:35.834+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-20T07:46:35.930+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 7 JPA repository interfaces. 
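
Editor's note: the "Found 7 JPA repository interfaces" line above refers to Spring Data JPA repository scanning at startup. The sketch below shows, under assumed and hypothetical names (not the actual policy-pap classes), the kind of interface that step discovers and implements automatically.

    // Hypothetical entity and repository; Spring Data generates the repository
    // implementation during the bootstrapping/scanning step shown in the log.
    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import java.util.List;
    import org.springframework.data.jpa.repository.JpaRepository;

    @Entity
    class PdpGroupEntitySketch {
        @Id
        private Long id;
        private String groupState;
        // getters/setters omitted for brevity
    }

    interface PdpGroupRepositorySketch extends JpaRepository<PdpGroupEntitySketch, Long> {
        // Derived query: the method name alone is enough for Spring Data to build the query.
        List<PdpGroupEntitySketch> findByGroupState(String groupState);
    }
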
policy-pap | [2025-06-20T07:46:36.963+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-20T07:46:36.978+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-20T07:46:36.989+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-20T07:46:36.989+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-20T07:46:37.052+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-20T07:46:37.053+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2626 ms policy-pap | [2025-06-20T07:46:37.504+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-20T07:46:37.583+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-20T07:46:37.629+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-20T07:46:38.053+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-20T07:46:38.100+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-20T07:46:38.343+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 policy-pap | [2025-06-20T07:46:38.345+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-20T07:46:38.440+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-20T07:46:40.594+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-20T07:46:40.598+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-20T07:46:41.929+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 14a2c8c5-4585-4382-b57e-7c1f1bc94225 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-20T07:46:41.993+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:42.148+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:42.148+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:42.148+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405602144 policy-pap | [2025-06-20T07:46:42.151+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-1, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-20T07:46:42.152+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-20T07:46:42.153+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:42.162+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:42.162+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:42.162+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405602162 policy-pap | [2025-06-20T07:46:42.162+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-20T07:46:42.566+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-20T07:46:42.698+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. 
Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-20T07:46:42.785+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-20T07:46:43.027+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-20T07:46:43.954+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-20T07:46:44.081+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-20T07:46:44.118+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-20T07:46:44.141+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-20T07:46:44.141+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-20T07:46:44.142+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-20T07:46:44.142+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-20T07:46:44.142+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-20T07:46:44.143+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-20T07:46:44.143+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-20T07:46:44.145+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=14a2c8c5-4585-4382-b57e-7c1f1bc94225, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7af66b8a policy-pap | [2025-06-20T07:46:44.169+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=14a2c8c5-4585-4382-b57e-7c1f1bc94225, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-20T07:46:44.170+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap 
| bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 14a2c8c5-4585-4382-b57e-7c1f1bc94225 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 
45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-20T07:46:44.171+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:44.178+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:44.178+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:44.178+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405604178 policy-pap | [2025-06-20T07:46:44.178+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-20T07:46:44.179+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-20T07:46:44.179+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ab54bd75-3100-4101-9e1a-b32bbce3884c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3f2fd933 policy-pap | [2025-06-20T07:46:44.179+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ab54bd75-3100-4101-9e1a-b32bbce3884c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-20T07:46:44.179+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = 
[kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap 
| socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-20T07:46:44.180+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:44.185+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:44.185+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:44.185+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405604185 policy-pap | [2025-06-20T07:46:44.185+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-20T07:46:44.186+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-20T07:46:44.186+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ab54bd75-3100-4101-9e1a-b32bbce3884c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-20T07:46:44.186+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=14a2c8c5-4585-4382-b57e-7c1f1bc94225, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-20T07:46:44.186+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c4b1e287-cf99-4056-b83f-c4dc708e5250, alive=false, publisher=null]]: starting policy-pap | [2025-06-20T07:46:44.200+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 
16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | 
ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-20T07:46:44.201+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:44.215+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-20T07:46:44.233+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:44.233+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:44.233+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405604233 policy-pap | [2025-06-20T07:46:44.233+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c4b1e287-cf99-4056-b83f-c4dc708e5250, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-20T07:46:44.233+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=61d52f1a-4ed5-43af-b9ae-42fb669a8428, alive=false, publisher=null]]: starting policy-pap | [2025-06-20T07:46:44.234+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 
2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-20T07:46:44.234+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-20T07:46:44.234+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
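(Editor's note, for readers tracing the producer setup: the ProducerConfig dumps above — bootstrap.servers = [kafka:9092], StringSerializer for keys and values, enable.idempotence = true, acks = -1 — correspond roughly to a Kafka client built as in the sketch below. This is only an illustration of the logged values using the plain Apache Kafka Java client, not the actual PAP wiring, which goes through the ONAP policy messaging libraries; the topic name is taken from the policy-pdp-pap entries later in this log, and the record key/value are purely illustrative.)

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LoggedProducerSettings {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the ProducerConfig dump in the log above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.ACKS_CONFIG, "all"); // equivalent to the logged acks = -1
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic name appears later in this log; the payload here is an example only.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "example-key", "example-message"));
                producer.flush();
            }
        }
    }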
policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750405604241 policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=61d52f1a-4ed5-43af-b9ae-42fb669a8428, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-20T07:46:44.241+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-20T07:46:44.242+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-20T07:46:44.242+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-20T07:46:44.249+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-20T07:46:44.249+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-20T07:46:44.250+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-20T07:46:44.250+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-20T07:46:44.251+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-20T07:46:44.252+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-20T07:46:44.253+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.721 seconds (process running for 11.317) policy-pap | [2025-06-20T07:46:44.256+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-20T07:46:44.745+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-20T07:46:44.745+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-pap | [2025-06-20T07:46:44.745+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-pap | [2025-06-20T07:46:44.746+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-pap | [2025-06-20T07:46:44.784+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-20T07:46:44.784+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-20T07:46:44.807+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:44.807+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Cluster ID: 6vY-6QxeRAqELjIL4Qvq3A policy-pap | [2025-06-20T07:46:44.936+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-20T07:46:44.941+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:45.168+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:45.174+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:45.613+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:45.655+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-20T07:46:46.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-20T07:46:46.534+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-20T07:46:46.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-20T07:46:46.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] (Re-)joining group policy-pap | [2025-06-20T07:46:46.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598 policy-pap | [2025-06-20T07:46:46.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-20T07:46:46.596+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Request joining group due to: need to re-join with the given member-id: consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4 policy-pap | [2025-06-20T07:46:46.597+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] (Re-)joining group policy-pap | [2025-06-20T07:46:49.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598', protocol='range'} policy-pap | [2025-06-20T07:46:49.624+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Successfully joined group with generation Generation{generationId=1, memberId='consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4', protocol='range'} policy-pap | [2025-06-20T07:46:49.634+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-20T07:46:49.634+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Finished assignment for group at generation 1: {consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-20T07:46:49.663+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Successfully synced group in generation Generation{generationId=1, memberId='consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3-6f522e93-8914-4e3c-81b5-57b84c5ce9f4', protocol='range'} policy-pap | [2025-06-20T07:46:49.664+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-20T07:46:49.665+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-11211561-e5bd-4986-b039-3eaf37fb0598', protocol='range'} policy-pap | [2025-06-20T07:46:49.666+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-20T07:46:49.668+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-20T07:46:49.669+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-20T07:46:49.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-20T07:46:49.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-20T07:46:49.717+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-14a2c8c5-4585-4382-b57e-7c1f1bc94225-3, groupId=14a2c8c5-4585-4382-b57e-7c1f1bc94225] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-20T07:46:49.717+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-20T07:47:41.608+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-20T07:47:41.608+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-20T07:47:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | postgres | postgres | Success. 
You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | waiting for server to start....2025-06-20 07:46:07.927 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-20 07:46:07.929 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-20 07:46:07.938 UTC [50] LOG: database system was shut down at 2025-06-20 07:46:07 UTC postgres | 2025-06-20 07:46:07.944 UTC [47] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO 
policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-20 07:46:09.233 UTC [47] LOG: received fast shutdown request postgres | waiting for server to shut down....2025-06-20 07:46:09.236 UTC [47] LOG: aborting any active transactions postgres | 2025-06-20 07:46:09.240 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 postgres | 2025-06-20 07:46:09.240 UTC [48] LOG: shutting down postgres | 2025-06-20 07:46:09.242 UTC [48] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-20 07:46:09.673 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.327 s, sync=0.096 s, total=0.433 s; sync files=1788, longest=0.010 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-20 07:46:09.684 UTC [47] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. 
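(Editor's note: the db-pg.sh trace above shows the init container creating the policy_user role and the migration, pooling, policyadmin, policyclamp, operationshistory and clampacm databases, each owned by policy_user. A minimal connectivity check against one of those databases might look like the sketch below; the host name "postgres" is the compose service name and port 5432 is the one logged, while the credentials come from the CREATE USER statement above. The example assumes the PostgreSQL JDBC driver is available on the classpath and is not part of the CSIT itself.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PolicyDbSmokeCheck {
        public static void main(String[] args) throws Exception {
            // Database, user and password are taken from the db-pg.sh output above.
            String url = "jdbc:postgresql://postgres:5432/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT current_database(), current_user")) {
                if (rs.next()) {
                    System.out.println(rs.getString(1) + " reachable as " + rs.getString(2));
                }
            }
        }
    }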
postgres | postgres | 2025-06-20 07:46:09.761 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-20 07:46:09.761 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-20 07:46:09.761 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-20 07:46:09.766 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-20 07:46:09.776 UTC [100] LOG: database system was shut down at 2025-06-20 07:46:09 UTC postgres | 2025-06-20 07:46:09.783 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-20T07:46:09.074Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-20T07:46:09.074Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-20T07:46:09.074Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-20T07:46:09.077Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-20T07:46:09.081Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-20T07:46:09.082Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-20T07:46:09.084Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-20T07:46:09.084Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-20T07:46:09.088Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-20T07:46:09.088Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.31µs prometheus | time=2025-06-20T07:46:09.088Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-20T07:46:09.088Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=412.251µs prometheus | time=2025-06-20T07:46:09.088Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=26.441µs wal_replay_duration=438.552µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.31µs total_replay_duration=538.444µs prometheus | time=2025-06-20T07:46:09.094Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-20T07:46:09.094Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-20T07:46:09.094Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-20T07:46:09.097Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-20T07:46:09.098Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.231µs remote_storage=2.57µs web_handler=820ns query_engine=1.89µs scrape=465.992µs scrape_sd=250.977µs notify=166.244µs notify_sd=31.091µs rules=2.44µs tracing=6.03µs filename=/etc/prometheus/prometheus.yml totalDuration=3.361869ms prometheus | time=2025-06-20T07:46:09.098Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-20T07:46:09.098Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-20 07:46:09,186] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,188] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,189] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,189] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,189] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,190] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-20 07:46:09,190] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-20 07:46:09,190] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-20 07:46:09,190] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-20 07:46:09,191] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-20 07:46:09,192] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,192] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,192] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,192] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,192] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-20 07:46:09,192] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-20 07:46:09,205] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-20 07:46:09,208] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-20 07:46:09,208] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-20 07:46:09,210] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-20 07:46:09,218] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,218] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-20 07:46:09,219] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,219] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,220] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,221] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-20 07:46:09,222] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,222] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,223] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-20 07:46:09,223] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,224] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-20 07:46:09,226] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,226] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,226] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-20 07:46:09,226] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-20 07:46:09,226] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,253] INFO Logging initialized @441ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-20 07:46:09,308] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-20 07:46:09,308] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-20 07:46:09,323] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-20 07:46:09,353] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-20 07:46:09,353] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-20 07:46:09,354] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-20 07:46:09,357] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-20 07:46:09,371] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-20 07:46:09,382] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-20 07:46:09,382] INFO Started @577ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-20 07:46:09,383] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-20 07:46:09,386] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-20 07:46:09,387] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-20 07:46:09,388] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-20 07:46:09,389] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-20 07:46:09,398] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-20 07:46:09,398] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-20 07:46:09,399] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-20 07:46:09,399] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-20 07:46:09,403] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-20 07:46:09,403] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-20 07:46:09,406] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-20 07:46:09,406] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-20 07:46:09,407] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-20 07:46:09,413] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-20 07:46:09,413] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-20 07:46:09,426] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-20 07:46:09,426] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-20 07:46:10,611] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container policy-csit Stopping Container policy-drools-pdp Stopping Container grafana Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-drools-pdp Stopped Container policy-drools-pdp Removing Container policy-drools-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container postgres Stopping Container postgres Stopped Container postgres Removing Container postgres Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2110 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins2016226856471499326.sh ---> sysstat.sh [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins5728182211189323721.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins2564315265472942253.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xS1v from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-xS1v/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins10640750138297769860.sh provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/config8665577337670094360tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins18204662477685711112.sh ---> create-netrc.sh [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins4004523612907137976.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xS1v from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-xS1v/bin to PATH [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins12251408377076890222.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins5554188324843030832.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xS1v from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-xS1v/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash -l /tmp/jenkins1236038063281357498.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xS1v from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-xS1v/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-drools-pdp/2039 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-22474 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         880       23774           0        7511       30831
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:71:cc:bc brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.243/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86068sec preferred_lft 86068sec
    inet6 fe80::f816:3eff:fe71:ccbc/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ea:c3:9b:2e brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:eaff:fec3:9b2e/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22474)  06/20/25  _x86_64_  (8 CPU)

07:43:51     LINUX RESTART      (8 CPU)

07:44:01        tps      rtps      wtps   bread/s   bwrtn/s
07:45:01     328.93     73.87    255.06   5325.25  81100.22
07:46:01     468.14     20.01    448.13   2257.22 211710.05
07:47:01     339.24      2.68    336.56    419.26  73467.36
07:48:01     247.08      0.42    246.67     36.52  69475.37
07:49:01      73.70      1.40     72.30    109.72   2393.73
Average:     291.42     19.68    271.74   1629.54  87628.74

07:44:01 kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
07:45:01  30102584  31633572   2836636      8.61     67040   1775048   1503420      4.42    912652   1633816    158452
07:46:01  25020892  31660672   7918328     24.04    146432   6577544   1688336      4.97    987600   6354760    653552
07:47:01  22787748  29726332  10151472     30.82    162200   6860488   8327736     24.50   3144624   6346120      2308
07:48:01  22213736  29690884  10725484     32.56    201976   7306812   8380048     24.66   3280140   6725784        92
07:49:01  24356452  31587120   8582768     26.06    203688   7054336   1704148      5.01   1445740   6492616     16988
Average:  24896282  30859716   8042938     24.42    156267   5914846   4320738     12.71   1954151   5510619    166278

07:44:01      IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
07:45:01       ens3    527.73    335.24   1660.99     81.11      0.00      0.00      0.00      0.00
07:45:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:45:01         lo      1.67      1.67      0.18      0.18      0.00      0.00      0.00      0.00
07:46:01       ens3   1301.10    747.01  37642.89     62.42      0.00      0.00      0.00      0.00
07:46:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:46:01         lo     13.60     13.60      1.25      1.25      0.00      0.00      0.00      0.00
07:46:01 br-816a9eee0ae5  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:47:01 veth87ef437      4.05     5.62      0.71      0.83      0.00      0.00      0.00      0.00
07:47:01       ens3     67.14     51.32    310.05      4.43      0.00      0.00      0.00      0.00
07:47:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:47:01 vethbb898d7     49.38    63.97      3.90    309.60      0.00      0.00      0.00      0.03
07:48:01 veth87ef437      0.17     0.38      0.01      0.03      0.00      0.00      0.00      0.00
07:48:01       ens3    230.01    150.38   2197.03     12.26      0.00      0.00      0.00      0.00
07:48:01    docker0    119.69    175.11      7.79   1347.65      0.00      0.00      0.00      0.00
07:48:01 vethbb898d7      0.50     0.38      0.03      0.02      0.00      0.00      0.00      0.00
07:49:01       ens3     49.54     42.39     69.16     34.28      0.00      0.00      0.00      0.00
07:49:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:49:01         lo     24.90     24.90      2.26      2.26      0.00      0.00      0.00      0.00
Average:       ens3    435.10    265.27   8375.82     38.90      0.00      0.00      0.00      0.00
Average:    docker0     23.94     35.03      1.56    269.57      0.00      0.00      0.00      0.00
Average:         lo      4.38      4.38      0.40      0.40      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22474)  06/20/25  _x86_64_  (8 CPU)

07:43:51     LINUX RESTART      (8 CPU)

07:44:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
07:45:01     all      9.29      0.00      1.32      2.95      0.04     86.40
07:45:01       0     10.34      0.00      0.97      0.94      0.03     87.72
07:45:01       1     22.30      0.00      2.76      6.62      0.07     68.26
07:45:01       2     12.29      0.00      1.29      0.43      0.05     85.94
07:45:01       3      5.67      0.00      0.95      5.74      0.03     87.61
07:45:01       4      5.94      0.00      2.00      0.40      0.05     91.61
07:45:01       5     11.66      0.00      1.29      6.97      0.03     80.05
07:45:01       6      3.76      0.00      0.58      1.35      0.02     94.29
07:45:01       7      2.37      0.00      0.70      1.19      0.03     95.70
07:46:01     all     18.07      0.00      7.51      7.40      0.08     66.93
07:46:01       0     14.00      0.00      7.39      2.53      0.05     76.03
07:46:01       1     14.61      0.00      7.79     11.26      0.07     66.27
07:46:01       2     33.97      0.00      9.19      2.52      0.10     54.22
07:46:01       3     15.40      0.00      7.07      3.01      0.07     74.44
07:46:01       4     17.76      0.00      7.33     23.16      0.08     51.67
07:46:01       5     14.44      0.00      7.20      1.69      0.08     76.59
07:46:01       6     18.42      0.00      8.37     11.92      0.08     61.20
07:46:01       7     16.00      0.00      5.80      3.25      0.13     74.81
07:47:01     all     28.24      0.00      3.81      2.42      0.08     65.45
07:47:01       0     21.64      0.00      3.31      0.75      0.08     74.21
07:47:01       1     33.14      0.00      4.51      0.75      0.08     61.51
07:47:01       2     25.98      0.00      3.62      8.78      0.08     61.54
07:47:01       3     33.99      0.00      3.97      0.20      0.08     61.76
07:47:01       4     29.97      0.00      3.82      5.11      0.10     61.01
07:47:01       5     27.48      0.00      3.30      0.65      0.07     68.50
07:47:01       6     26.14      0.00      3.93      2.40      0.08     67.45
07:47:01       7     27.61      0.00      4.04      0.69      0.08     67.58
07:48:01     all      8.09      0.00      2.58      2.32      0.06     86.95
07:48:01       0      7.54      0.00      2.23      2.18      0.05     88.01
07:48:01       1      8.99      0.00      2.44      4.57      0.07     83.94
07:48:01       2     11.35      0.00      2.75      3.17      0.05     82.69
07:48:01       3      6.24      0.00      3.04      6.73      0.05     83.94
07:48:01       4      7.69      0.00      2.86      0.32      0.05     89.08
07:48:01       5      9.24      0.00      3.09      0.42      0.05     87.20
07:48:01       6      5.50      0.00      1.71      0.15      0.05     92.59
07:48:01       7      8.19      0.00      2.52      1.03      0.07     88.19
07:49:01     all      5.66      0.00      0.87      0.25      0.03     93.19
07:49:01       0      4.27      0.00      0.86      1.11      0.03     93.74
07:49:01       1     28.93      0.00      1.53      0.20      0.05     69.28
07:49:01       2      1.32      0.00      0.79      0.13      0.03     97.73
07:49:01       3      1.64      0.00      0.80      0.03      0.03     97.49
07:49:01       4      1.44      0.00      0.68      0.23      0.03     97.61
07:49:01       5      1.84      0.00      0.83      0.12      0.03     97.18
07:49:01       6      4.34      0.00      0.84      0.07      0.02     94.74
07:49:01       7      1.39      0.00      0.67      0.18      0.02     97.74
Average:     all     13.86      0.00      3.21      3.06      0.06     79.81
Average:       0     11.56      0.00      2.95      1.50      0.05     83.94
Average:       1     21.61      0.00      3.80      4.67      0.07     69.85
Average:       2     16.95      0.00      3.52      3.00      0.06     76.47
Average:       3     12.58      0.00      3.16      3.14      0.05     81.07
Average:       4     12.53      0.00      3.32      5.80      0.06     78.29
Average:       5     12.92      0.00      3.14      1.97      0.05     81.92
Average:       6     11.60      0.00      3.07      3.16      0.05     82.11
Average:       7     11.10      0.00      2.74      1.27      0.07     84.83
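(Editor's note: each "---> command:" header in the host-metrics dump above is simply the command whose output follows it. The snippet below is an illustrative sketch of a script that would reproduce the same dump on a comparable host; the script name and loop are assumptions, not the actual LF-infra implementation.)

#!/bin/bash
# Illustrative reconstruction of the diagnostics dump above: run each
# named command and print its output under a "---> cmd:" header.
for cmd in "uname -a" "lscpu" "nproc" "df -h" "free -m" "ip addr" \
           "sar -b -r -n DEV" "sar -P ALL"; do
    echo
    echo "---> ${cmd}:"
    ${cmd} 2>/dev/null || echo "WARN: '${cmd}' not available on this host"
done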