Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21077 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-h0Mfu8WOSLhT/agent.2102
SSH_AGENT_PID=2104
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_10514335084917989988.key (/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_10514335084917989988.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
Commit message: "Remove VFC from docker compose and helm configurations"
 > git rev-list --no-walk 1e361efcd8a4b3caab4f41f34078024e85ac9d73 # timeout=10
provisioning config files...
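Editor's note: the checkout above can be reproduced outside Jenkins with the equivalent git sequence sketched below, using the repository URL and revision taken from this log (the target directory name is arbitrary; the git:// mirror URL is the one the job used):

    # clone the ONAP policy/docker mirror and pin the exact revision built in this run
    git init policy-docker && cd policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c   # detached HEAD at the built commit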
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins1242324321359589103.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-28xT
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-28xT/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-28xT/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.4.26
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
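Editor's note: the lf-activate-venv() output above corresponds roughly to the shell steps sketched below. This is a minimal sketch assuming standard python3 venv/pip tooling; the /tmp/venv-28xT path is simply the one this run happened to create, and the real helper script may do more than shown:

    # create the tools venv, install lftools, and produce the requirements listing shown above
    python3 -m venv /tmp/venv-28xT                       # "Creating python3 venv at /tmp/venv-28xT"
    echo /tmp/venv-28xT > /tmp/.os_lf_venv               # "Save venv in file: /tmp/.os_lf_venv"
    /tmp/venv-28xT/bin/pip install --upgrade pip lftools # "Installing: lftools"
    export PATH=/tmp/venv-28xT/bin:$PATH                 # "Adding /tmp/venv-28xT/bin to PATH"
    pip freeze                                           # "Generating Requirements File"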
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh /tmp/jenkins17733099768497877166.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh -xe /tmp/jenkins12508723752994888124.sh
+ /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command. See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter: 60.2M downloaded at 73.7M/s]
Setting project configuration for: drools-pdp
Configuring docker compose...
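Editor's note: the "Installing now..." step above fetches the Docker Compose v2 CLI plugin (hence the ~60 MB curl download). The exact URL and install path used by the CSIT script are not shown in this log; a typical manual equivalent, with the release URL and plugin path as assumptions, would be:

    # install the Compose v2 CLI plugin so 'docker compose' becomes available (URL/path assumed)
    mkdir -p ~/.docker/cli-plugins
    curl -fsSL -o ~/.docker/cli-plugins/docker-compose \
      https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
    chmod +x ~/.docker/cli-plugins/docker-compose
    docker compose version   # verify the plugin is picked up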
Starting drools-pdp using postgres + Grafana/Prometheus
pap Pulling
api Pulling
kafka Pulling
zookeeper Pulling
policy-db-migrator Pulling
postgres Pulling
drools-pdp Pulling
grafana Pulling
prometheus Pulling
[docker compose pull progress: per-layer "Pulling fs layer" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" status for the images above ... prometheus Pulled]
] 32.44MB/91.87MB dcc0c3b2850c Downloading [========> ] 12.98MB/76.12MB eabd8714fec9 Extracting [=========> ] 69.07MB/375MB 55f2b468da67 Extracting [================================> ] 166.6MB/257.9MB 96e38c8865ba Extracting [===> ] 4.456MB/71.91MB 96e38c8865ba Extracting [===> ] 4.456MB/71.91MB e73cb4a42719 Extracting [====================================> ] 78.54MB/109.1MB 56aca8a42329 Extracting [===============> ] 22.28MB/71.91MB c124ba1a8b26 Downloading [========================> ] 44.33MB/91.87MB dcc0c3b2850c Downloading [===============> ] 24.33MB/76.12MB eabd8714fec9 Extracting [==========> ] 80.77MB/375MB 55f2b468da67 Extracting [================================> ] 169.9MB/257.9MB e73cb4a42719 Extracting [=====================================> ] 81.33MB/109.1MB 96e38c8865ba Extracting [====> ] 6.685MB/71.91MB 96e38c8865ba Extracting [====> ] 6.685MB/71.91MB 56aca8a42329 Extracting [=================> ] 25.62MB/71.91MB c124ba1a8b26 Downloading [==============================> ] 55.69MB/91.87MB dcc0c3b2850c Downloading [========================> ] 36.76MB/76.12MB eabd8714fec9 Extracting [===========> ] 88.57MB/375MB 96e38c8865ba Extracting [======> ] 9.47MB/71.91MB 96e38c8865ba Extracting [======> ] 9.47MB/71.91MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB 56aca8a42329 Extracting [====================> ] 28.97MB/71.91MB c124ba1a8b26 Downloading [=====================================> ] 69.75MB/91.87MB dcc0c3b2850c Downloading [===============================> ] 48.12MB/76.12MB eabd8714fec9 Extracting [============> ] 96.37MB/375MB 96e38c8865ba Extracting [========> ] 12.81MB/71.91MB 96e38c8865ba Extracting [========> ] 12.81MB/71.91MB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB e73cb4a42719 Extracting [========================================> ] 88.57MB/109.1MB 56aca8a42329 Extracting [=====================> ] 31.2MB/71.91MB c124ba1a8b26 Downloading [============================================> ] 81.1MB/91.87MB dcc0c3b2850c Downloading [====================================> ] 55.69MB/76.12MB eabd8714fec9 Extracting [=============> ] 103.1MB/375MB e73cb4a42719 Extracting [=========================================> ] 90.8MB/109.1MB 96e38c8865ba Extracting [==========> ] 15.6MB/71.91MB 96e38c8865ba Extracting [==========> ] 15.6MB/71.91MB 56aca8a42329 Extracting [========================> ] 34.54MB/71.91MB c124ba1a8b26 Downloading [=================================================> ] 91.37MB/91.87MB c124ba1a8b26 Verifying Checksum c124ba1a8b26 Download complete dcc0c3b2850c Downloading [============================================> ] 67.58MB/76.12MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB eabd8714fec9 Extracting [==============> ] 106.4MB/375MB e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB 96e38c8865ba Extracting [============> ] 17.27MB/71.91MB 96e38c8865ba Extracting [============> ] 17.27MB/71.91MB dcc0c3b2850c Downloading [==============================================> ] 71.37MB/76.12MB 56aca8a42329 Extracting [=========================> ] 37.32MB/71.91MB 56aca8a42329 Extracting [=============================> ] 41.78MB/71.91MB 56aca8a42329 Extracting [================================> ] 46.24MB/71.91MB dcc0c3b2850c Downloading [===============================================> ] 71.91MB/76.12MB eabd8714fec9 Extracting [==============> ] 107MB/375MB 
55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 96e38c8865ba Extracting [============> ] 17.83MB/71.91MB 96e38c8865ba Extracting [============> ] 17.83MB/71.91MB e73cb4a42719 Extracting [==========================================> ] 93.03MB/109.1MB dcc0c3b2850c Verifying Checksum dcc0c3b2850c Download complete 56aca8a42329 Extracting [=================================> ] 48.46MB/71.91MB eabd8714fec9 Extracting [==============> ] 110.3MB/375MB 96e38c8865ba Extracting [==============> ] 21.17MB/71.91MB 96e38c8865ba Extracting [==============> ] 21.17MB/71.91MB e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB 56aca8a42329 Extracting [===================================> ] 51.25MB/71.91MB eabd8714fec9 Extracting [===============> ] 113.6MB/375MB 96e38c8865ba Extracting [=================> ] 25.07MB/71.91MB 96e38c8865ba Extracting [=================> ] 25.07MB/71.91MB 56aca8a42329 Extracting [======================================> ] 55.15MB/71.91MB 2d7f854c01cf Pull complete 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 96e38c8865ba Extracting [====================> ] 30.08MB/71.91MB 96e38c8865ba Extracting [====================> ] 30.08MB/71.91MB eabd8714fec9 Extracting [===============> ] 117.5MB/375MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 56aca8a42329 Extracting [========================================> ] 57.93MB/71.91MB 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB 96e38c8865ba Extracting [======================> ] 32.87MB/71.91MB 96e38c8865ba Extracting [======================> ] 32.87MB/71.91MB eabd8714fec9 Extracting [===============> ] 119.2MB/375MB e73cb4a42719 Extracting [=============================================> ] 98.6MB/109.1MB 56aca8a42329 Extracting [=========================================> ] 60.16MB/71.91MB 8e665a4a2af9 Extracting [> ] 557.1kB/107.2MB 384497dbce3b Extracting [> ] 557.1kB/63.48MB 55f2b468da67 Extracting [===================================> ] 181MB/257.9MB eabd8714fec9 Extracting [================> ] 122MB/375MB 96e38c8865ba Extracting [=========================> ] 36.21MB/71.91MB 96e38c8865ba Extracting [=========================> ] 36.21MB/71.91MB e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 56aca8a42329 Extracting [===========================================> ] 62.39MB/71.91MB 8e665a4a2af9 Extracting [====> ] 10.03MB/107.2MB 55f2b468da67 Extracting [===================================> ] 183.8MB/257.9MB eabd8714fec9 Extracting [================> ] 125.3MB/375MB 96e38c8865ba Extracting [==========================> ] 38.44MB/71.91MB 96e38c8865ba Extracting [==========================> ] 38.44MB/71.91MB 56aca8a42329 Extracting [==============================================> ] 66.85MB/71.91MB 8e665a4a2af9 Extracting [========> ] 17.83MB/107.2MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 96e38c8865ba Extracting [============================> ] 41.22MB/71.91MB 96e38c8865ba Extracting [============================> ] 41.22MB/71.91MB 56aca8a42329 Extracting [================================================> ] 69.63MB/71.91MB 8e665a4a2af9 Extracting [============> ] 
26.18MB/107.2MB 384497dbce3b Extracting [> ] 1.114MB/63.48MB e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 55f2b468da67 Extracting [====================================> ] 189.4MB/257.9MB eabd8714fec9 Extracting [=================> ] 129.2MB/375MB 8e665a4a2af9 Extracting [================> ] 34.54MB/107.2MB 96e38c8865ba Extracting [==============================> ] 43.45MB/71.91MB 96e38c8865ba Extracting [==============================> ] 43.45MB/71.91MB 56aca8a42329 Extracting [=================================================> ] 71.86MB/71.91MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 56aca8a42329 Extracting [==================================================>] 71.91MB/71.91MB 55f2b468da67 Extracting [=====================================> ] 192.7MB/257.9MB eabd8714fec9 Extracting [=================> ] 130.9MB/375MB 8e665a4a2af9 Extracting [====================> ] 42.89MB/107.2MB 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 96e38c8865ba Extracting [===============================> ] 45.68MB/71.91MB 96e38c8865ba Extracting [===============================> ] 45.68MB/71.91MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 8e665a4a2af9 Extracting [==========================> ] 56.82MB/107.2MB eabd8714fec9 Extracting [=================> ] 134.8MB/375MB 96e38c8865ba Extracting [=================================> ] 48.46MB/71.91MB 96e38c8865ba Extracting [=================================> ] 48.46MB/71.91MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 8e665a4a2af9 Extracting [=============================> ] 63.5MB/107.2MB eabd8714fec9 Extracting [==================> ] 137MB/375MB 96e38c8865ba Extracting [==================================> ] 49.58MB/71.91MB 96e38c8865ba Extracting [==================================> ] 49.58MB/71.91MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 8e665a4a2af9 Extracting [==================================> ] 74.09MB/107.2MB eabd8714fec9 Extracting [==================> ] 140.4MB/375MB 96e38c8865ba Extracting [====================================> ] 51.81MB/71.91MB 96e38c8865ba Extracting [====================================> ] 51.81MB/71.91MB 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 8e665a4a2af9 Extracting [=====================================> ] 81.33MB/107.2MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 96e38c8865ba Extracting [====================================> ] 52.92MB/71.91MB 96e38c8865ba Extracting [====================================> ] 52.92MB/71.91MB 56aca8a42329 Pull complete 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 8e665a4a2af9 Extracting [========================================> ] 85.79MB/107.2MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 96e38c8865ba Extracting [======================================> ] 55.71MB/71.91MB 96e38c8865ba Extracting [======================================> ] 55.71MB/71.91MB eabd8714fec9 Extracting [===================> ] 143.2MB/375MB fbe227156a9a Extracting [> ] 163.8kB/14.63MB 8e665a4a2af9 Extracting [=========================================> ] 89.69MB/107.2MB 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 
55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB 96e38c8865ba Extracting [=======================================> ] 57.38MB/71.91MB 96e38c8865ba Extracting [=======================================> ] 57.38MB/71.91MB 8e665a4a2af9 Extracting [=============================================> ] 96.93MB/107.2MB eabd8714fec9 Extracting [===================> ] 145.4MB/375MB fbe227156a9a Extracting [=> ] 327.7kB/14.63MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB 8e665a4a2af9 Extracting [=================================================> ] 106.4MB/107.2MB 96e38c8865ba Extracting [=========================================> ] 59.6MB/71.91MB 96e38c8865ba Extracting [=========================================> ] 59.6MB/71.91MB 8e665a4a2af9 Extracting [==================================================>] 107.2MB/107.2MB eabd8714fec9 Extracting [===================> ] 147.1MB/375MB fbe227156a9a Extracting [============> ] 3.768MB/14.63MB e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 96e38c8865ba Extracting [===========================================> ] 62.39MB/71.91MB 96e38c8865ba Extracting [===========================================> ] 62.39MB/71.91MB eabd8714fec9 Extracting [===================> ] 148.7MB/375MB 384497dbce3b Extracting [===> ] 4.456MB/63.48MB fbe227156a9a Extracting [=================> ] 5.079MB/14.63MB 96e38c8865ba Extracting [=============================================> ] 65.73MB/71.91MB 96e38c8865ba Extracting [=============================================> ] 65.73MB/71.91MB 55f2b468da67 Extracting [======================================> ] 201.1MB/257.9MB eabd8714fec9 Extracting [====================> ] 151.5MB/375MB fbe227156a9a Extracting [===================> ] 5.571MB/14.63MB 96e38c8865ba Extracting [==============================================> ] 67.4MB/71.91MB 96e38c8865ba Extracting [==============================================> ] 67.4MB/71.91MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB fbe227156a9a Extracting [========================> ] 7.045MB/14.63MB 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB 96e38c8865ba Extracting [=================================================> ] 71.86MB/71.91MB 96e38c8865ba Extracting [=================================================> ] 71.86MB/71.91MB fbe227156a9a Extracting [==========================> ] 7.7MB/14.63MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 96e38c8865ba Extracting [==================================================>] 71.91MB/71.91MB 96e38c8865ba Extracting [==================================================>] 71.91MB/71.91MB 384497dbce3b Extracting [====> ] 5.571MB/63.48MB 384497dbce3b Extracting [====> ] 6.128MB/63.48MB eabd8714fec9 Extracting [=====================> ] 158.8MB/375MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB fbe227156a9a Extracting [===========================> ] 8.192MB/14.63MB 8e665a4a2af9 Pull complete e73cb4a42719 Pull complete eabd8714fec9 Extracting [=====================> ] 162.1MB/375MB fbe227156a9a Extracting [=============================> 
] 8.52MB/14.63MB 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [======================> ] 165.4MB/375MB fbe227156a9a Extracting [=====================================> ] 10.98MB/14.63MB eabd8714fec9 Extracting [======================> ] 171MB/375MB fbe227156a9a Extracting [=======================================> ] 11.47MB/14.63MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB eabd8714fec9 Extracting [=======================> ] 177.7MB/375MB 96e38c8865ba Pull complete 96e38c8865ba Pull complete 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 5e06c6bed798 Extracting [==================================================>] 296B/296B e5d7009d9e55 Extracting [==================================================>] 295B/295B e5d7009d9e55 Extracting [==================================================>] 295B/295B 5e06c6bed798 Extracting [==================================================>] 296B/296B fbe227156a9a Extracting [========================================> ] 11.8MB/14.63MB eabd8714fec9 Extracting [========================> ] 183.3MB/375MB 219d845251ba Extracting [> ] 557.1kB/108.2MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 384497dbce3b Extracting [=======> ] 10.03MB/63.48MB a83b68436f09 Pull complete eabd8714fec9 Extracting [=========================> ] 188.8MB/375MB fbe227156a9a Extracting [=========================================> ] 12.29MB/14.63MB 219d845251ba Extracting [=> ] 3.342MB/108.2MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 384497dbce3b Extracting [========> ] 11.14MB/63.48MB fbe227156a9a Extracting [=============================================> ] 13.43MB/14.63MB 219d845251ba Extracting [===> ] 7.242MB/108.2MB eabd8714fec9 Extracting [=========================> ] 195MB/375MB fbe227156a9a Extracting [==================================================>] 14.63MB/14.63MB 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 55f2b468da67 Extracting [========================================> ] 209.5MB/257.9MB 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 219d845251ba Extracting [=====> ] 12.26MB/108.2MB eabd8714fec9 Extracting [==========================> ] 199.4MB/375MB 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB 219d845251ba Extracting [=======> ] 17.27MB/108.2MB eabd8714fec9 Extracting [===========================> ] 208.9MB/375MB 384497dbce3b Extracting [===========> ] 15.04MB/63.48MB 219d845251ba Extracting [=========> ] 20.61MB/108.2MB eabd8714fec9 Extracting [============================> ] 214.5MB/375MB 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 219d845251ba Extracting [============> ] 27.85MB/108.2MB 219d845251ba Extracting [==============> ] 30.64MB/108.2MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 5e06c6bed798 Pull complete e5d7009d9e55 Pull complete 384497dbce3b Extracting 
[=============> ] 17.27MB/63.48MB 684be6598fc9 Extracting [============> ] 32.77kB/127.5kB 787d6bee9571 Pull complete fbe227156a9a Pull complete 684be6598fc9 Extracting [==================================================>] 127.5kB/127.5kB 684be6598fc9 Extracting [==================================================>] 127.5kB/127.5kB 219d845251ba Extracting [================> ] 36.21MB/108.2MB eabd8714fec9 Extracting [=============================> ] 219.5MB/375MB 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB 1ec5fb03eaee Extracting [============> ] 32.77kB/127kB 1ec5fb03eaee Extracting [==================================================>] 127kB/127kB b56567b07821 Extracting [==================================================>] 1.077kB/1.077kB b56567b07821 Extracting [==================================================>] 1.077kB/1.077kB 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 384497dbce3b Extracting [==============> ] 18.94MB/63.48MB 219d845251ba Extracting [==================> ] 40.67MB/108.2MB eabd8714fec9 Extracting [=============================> ] 221.2MB/375MB 384497dbce3b Extracting [===============> ] 20.05MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 218.9MB/257.9MB 219d845251ba Extracting [===================> ] 42.34MB/108.2MB eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 684be6598fc9 Pull complete 0d92cad902ba Extracting [==================================================>] 1.148kB/1.148kB 0d92cad902ba Extracting [==================================================>] 1.148kB/1.148kB 384497dbce3b Extracting [=================> ] 22.28MB/63.48MB 219d845251ba Extracting [======================> ] 48.46MB/108.2MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB eabd8714fec9 Extracting [=============================> ] 224.5MB/375MB 1ec5fb03eaee Pull complete 384497dbce3b Extracting [===================> ] 24.51MB/63.48MB 219d845251ba Extracting [=========================> ] 55.15MB/108.2MB 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB eabd8714fec9 Extracting [==============================> ] 227.3MB/375MB 384497dbce3b Extracting [=====================> ] 27.85MB/63.48MB 219d845251ba Extracting [=============================> ] 64.06MB/108.2MB eabd8714fec9 Extracting [==============================> ] 230.6MB/375MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB d3165a332ae3 Extracting [==================================================>] 1.328kB/1.328kB 219d845251ba Extracting [===============================> ] 68.52MB/108.2MB d3165a332ae3 Extracting [==================================================>] 1.328kB/1.328kB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB b56567b07821 Pull complete 13ff0988aaea Pull complete 219d845251ba Extracting [================================> ] 70.19MB/108.2MB eabd8714fec9 Extracting [==============================> ] 232.3MB/375MB f243361b999b Extracting [==================================================>] 5.242kB/5.242kB 384497dbce3b Extracting [=======================> ] 30.08MB/63.48MB f243361b999b Extracting [==================================================>] 5.242kB/5.242kB 
55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 219d845251ba Extracting [===================================> ] 76.87MB/108.2MB eabd8714fec9 Extracting [===============================> ] 234.5MB/375MB 0d92cad902ba Pull complete 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB 219d845251ba Extracting [=======================================> ] 84.67MB/108.2MB eabd8714fec9 Extracting [===============================> ] 237.3MB/375MB 384497dbce3b Extracting [==========================> ] 33.98MB/63.48MB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB eabd8714fec9 Extracting [===============================> ] 239.5MB/375MB 219d845251ba Extracting [===========================================> ] 93.03MB/108.2MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 384497dbce3b Extracting [============================> ] 35.65MB/63.48MB 219d845251ba Extracting [==============================================> ] 99.71MB/108.2MB eabd8714fec9 Extracting [================================> ] 242.9MB/375MB 55f2b468da67 Extracting [=============================================> ] 233.4MB/257.9MB 219d845251ba Extracting [==============================================> ] 101.4MB/108.2MB eabd8714fec9 Extracting [================================> ] 244MB/375MB 384497dbce3b Extracting [=============================> ] 37.88MB/63.48MB 219d845251ba Extracting [==================================================>] 108.2MB/108.2MB 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 235.1MB/257.9MB eabd8714fec9 Extracting [================================> ] 246.2MB/375MB eabd8714fec9 Extracting [================================> ] 246.8MB/375MB 384497dbce3b Extracting [===============================> ] 39.55MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB dcc0c3b2850c Extracting [> ] 557.1kB/76.12MB d3165a332ae3 Pull complete 384497dbce3b Extracting [=================================> ] 42.34MB/63.48MB eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB dcc0c3b2850c Extracting [====> ] 6.128MB/76.12MB 384497dbce3b Extracting [==================================> ] 44.01MB/63.48MB eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB dcc0c3b2850c Extracting [============> ] 18.38MB/76.12MB f243361b999b Pull complete 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 55f2b468da67 Extracting [==============================================> ] 242.3MB/257.9MB dcc0c3b2850c Extracting [=============> ] 20.05MB/76.12MB 219d845251ba Pull complete 4b82842ab819 Pull complete eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 55f2b468da67 Extracting [===============================================> ] 243.4MB/257.9MB dcc0c3b2850c Extracting [=============> ] 20.61MB/76.12MB 384497dbce3b Extracting [===================================> ] 45.68MB/63.48MB c124ba1a8b26 Extracting [> ] 557.1kB/91.87MB dcc0c3b2850c Extracting [=============> ] 21.17MB/76.12MB 
eabd8714fec9 Extracting [=================================> ] 254MB/375MB 384497dbce3b Extracting [====================================> ] 46.24MB/63.48MB dcc0c3b2850c Extracting [====================> ] 30.64MB/76.12MB c124ba1a8b26 Extracting [====> ] 8.913MB/91.87MB eabd8714fec9 Extracting [==================================> ] 256.8MB/375MB 384497dbce3b Extracting [======================================> ] 49.02MB/63.48MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB dcc0c3b2850c Extracting [========================> ] 37.88MB/76.12MB c124ba1a8b26 Extracting [=========> ] 16.71MB/91.87MB eabd8714fec9 Extracting [==================================> ] 260.1MB/375MB 384497dbce3b Extracting [========================================> ] 51.25MB/63.48MB 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB dcc0c3b2850c Extracting [===============================> ] 48.46MB/76.12MB c124ba1a8b26 Extracting [============> ] 23.4MB/91.87MB eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 384497dbce3b Extracting [==========================================> ] 53.48MB/63.48MB dcc0c3b2850c Extracting [======================================> ] 57.93MB/76.12MB 55f2b468da67 Extracting [=================================================> ] 254MB/257.9MB c124ba1a8b26 Extracting [===============> ] 28.41MB/91.87MB eabd8714fec9 Extracting [===================================> ] 267.4MB/375MB dcc0c3b2850c Extracting [===========================================> ] 66.85MB/76.12MB c124ba1a8b26 Extracting [===================> ] 36.21MB/91.87MB dcc0c3b2850c Extracting [==================================================>] 76.12MB/76.12MB c124ba1a8b26 Extracting [==========================> ] 48.46MB/91.87MB 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB c124ba1a8b26 Extracting [================================> ] 59.05MB/91.87MB 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB c124ba1a8b26 Extracting [=================================> ] 61.83MB/91.87MB c124ba1a8b26 Extracting [===================================> ] 65.18MB/91.87MB dcc0c3b2850c Pull complete eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB 7abf0dc59d35 Pull complete 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB drools-pdp Pulled 7e568a0dc8fb Pull complete postgres Pulled c124ba1a8b26 Extracting [=======================================> ] 71.86MB/91.87MB 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB c124ba1a8b26 Extracting 
[=========================================> ] 76.32MB/91.87MB 55f2b468da67 Pull complete eb7cda286a15 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 991de477d40a Pull complete api Pulled 5efc16ba9cdc Extracting [==================================================>] 19.52kB/19.52kB 5efc16ba9cdc Extracting [==================================================>] 19.52kB/19.52kB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB c124ba1a8b26 Extracting [=============================================> ] 83.56MB/91.87MB 82bfc142787e Extracting [=======> ] 1.278MB/8.613MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 82bfc142787e Extracting [=================================> ] 5.8MB/8.613MB c124ba1a8b26 Extracting [================================================> ] 88.57MB/91.87MB 384497dbce3b Pull complete 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 5efc16ba9cdc Pull complete policy-db-migrator Pulled c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB c124ba1a8b26 Pull complete 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 82bfc142787e Pull complete 055b9255fa03 Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB eabd8714fec9 Extracting [====================================> ] 275.7MB/375MB 6394804c2196 Pull complete pap Pulled eabd8714fec9 Extracting [=====================================> ] 280.8MB/375MB b176d7edde70 Pull complete 46baca71a4ef Pull complete grafana Pulled eabd8714fec9 Extracting [======================================> ] 286.3MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB b0e0ef7895f4 Extracting [===============> ] 11.4MB/37.01MB eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB b0e0ef7895f4 Extracting [===================================> ] 25.95MB/37.01MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting 
[==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 5cfb27c10ea5 Pull complete eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB e040ea11fa10 Pull complete 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 09d5a3f70313 Extracting [====> ] 10.03MB/109.2MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 09d5a3f70313 Extracting [=========> ] 20.05MB/109.2MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 09d5a3f70313 Extracting [=============> ] 29.52MB/109.2MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 09d5a3f70313 Extracting [===================> ] 43.45MB/109.2MB eabd8714fec9 Extracting [==========================================> ] 317MB/375MB 09d5a3f70313 Extracting [==========================> ] 57.93MB/109.2MB eabd8714fec9 Extracting [==========================================> ] 320.9MB/375MB 09d5a3f70313 Extracting [================================> ] 70.19MB/109.2MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 09d5a3f70313 Extracting [====================================> ] 80.77MB/109.2MB eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB 09d5a3f70313 Extracting [==========================================> ] 91.91MB/109.2MB 09d5a3f70313 Extracting [===============================================> ] 103.6MB/109.2MB eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 09d5a3f70313 Extracting [================================================> ] 106.4MB/109.2MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 09d5a3f70313 Pull complete eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 356f5c2c843b Extracting 
[==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [================================================> ] 364.3MB/375MB eabd8714fec9 Extracting [=================================================> ] 369.3MB/375MB eabd8714fec9 Extracting [=================================================> ] 372.7MB/375MB eabd8714fec9 Extracting [=================================================> ] 374.9MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 356f5c2c843b Pull complete eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Pull complete kafka Pulled 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [=================================> ] 5.8MB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB f3a82e9f1761 Extracting [===========> ] 10.09MB/44.41MB f3a82e9f1761 Extracting [===========================> ] 24.31MB/44.41MB f3a82e9f1761 Extracting [============================================> ] 39.45MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [=====> ] 13.93MB/127.4MB da3ed5db7103 Extracting [===========> ] 28.97MB/127.4MB da3ed5db7103 Extracting [=================> ] 44.01MB/127.4MB da3ed5db7103 Extracting [========================> ] 61.28MB/127.4MB da3ed5db7103 Extracting [==============================> ] 78.54MB/127.4MB da3ed5db7103 Extracting [=====================================> ] 96.37MB/127.4MB da3ed5db7103 Extracting [===========================================> ] 
111.4MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 120.3MB/127.4MB da3ed5db7103 Extracting [=================================================> ] 125.3MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container prometheus Creating Container postgres Creating Container prometheus Created Container grafana Creating Container postgres Created Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container grafana Created Container kafka Created Container policy-db-migrator Created Container policy-api Creating Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-drools-pdp Creating Container policy-drools-pdp Created Container postgres Starting Container prometheus Starting Container zookeeper Starting Container prometheus Started Container grafana Starting Container postgres Started Container policy-db-migrator Starting Container zookeeper Started Container kafka Starting Container kafka Started Container policy-db-migrator Started Container policy-api Starting Container grafana Started Container policy-api Started Container policy-pap Starting Container policy-pap Started Container policy-drools-pdp Starting Container policy-drools-pdp Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting 1 minute for drools-pdp to start... Checking if REST port 30216 is open on localhost ... IMAGE NAMES STATUS nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT policy-drools-pdp Up About a minute nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up About a minute nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up About a minute nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up About a minute nexus3.onap.org:10001/grafana/grafana:latest grafana Up About a minute nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up About a minute nexus3.onap.org:10001/prom/prometheus:latest prometheus Up About a minute nexus3.onap.org:10001/library/postgres:16.4 postgres Up About a minute Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/resources/tests/models'... 
Building robot framework docker image sha256:ef80ddc48f5aa9632df0aef598c68d108a1a34fe0a4c99faf68353cbdc672f26
top - 07:48:08 up 4 min, 0 users, load average: 2.08, 1.60, 0.68
Tasks: 230 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.0 us, 4.0 sy, 0.0 ni, 76.2 id, 4.6 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.8G         20G         27M        7.7G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
b4543dc331e2   policy-drools-pdp   0.69%   291.7MiB / 31.41GiB   0.91%   32.2kB / 41kB     0B / 8.19kB   54
6ff6666cfaa4   policy-pap          6.75%   554.4MiB / 31.41GiB   1.72%   82.5kB / 124kB    0B / 139MB    67
339f1f134234   policy-api          0.45%   511MiB / 31.41GiB     1.59%   1.14MB / 986kB    0B / 4.1kB    59
6bc55a8268ce   kafka               5.12%   396.4MiB / 31.41GiB   1.23%   153kB / 137kB     0B / 594kB    83
05e17f0c3f62   grafana             0.29%   108.7MiB / 31.41GiB   0.34%   19.1MB / 160kB    0B / 30.9MB   20
fa21fac87524   zookeeper           0.07%   86.11MiB / 31.41GiB   0.27%   51.2kB / 44.3kB   0B / 438kB    62
27fa957146f2   prometheus          0.00%   21.41MiB / 31.41GiB   0.07%   56.4kB / 2.38kB   229kB / 0B    13
5904bf754954   postgres            0.59%   85MiB / 31.41GiB      0.26%   1.64MB / 1.71MB   0B / 157MB    26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
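The ROBOT_VARIABLES listed above are ordinary Robot Framework -v name:value overrides, so the run inside the policy-csit container amounts to a direct robot invocation. The sketch below is a hypothetical manual re-run using only the variables the drools-pdp suite touches; the output directory and the test file path are assumptions, not taken from the CSIT image.

# Hypothetical manual invocation (illustrative only); mirrors a subset of the
# ROBOT_VARIABLES printed above. The paths and --outputdir value are assumptions.
robot --outputdir /tmp/results \
  -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
  -v POLICY_DROOLS_IP:policy-drools-pdp:9696 \
  -v PROMETHEUS_IP:prometheus:9090 \
  -v TEST_ENV:docker \
  drools-pdp-test.robot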
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-14T07:46:21.364397908Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-14T07:46:21Z
grafana | logger=settings t=2025-06-14T07:46:21.365294067Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-14T07:46:21.365308417Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-14T07:46:21.365314367Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-14T07:46:21.365319898Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-14T07:46:21.365326758Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-14T07:46:21.365333648Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-14T07:46:21.365337728Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-14T07:46:21.365343838Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-14T07:46:21.365419998Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-14T07:46:21.365442939Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-14T07:46:21.365448469Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-14T07:46:21.365452699Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-14T07:46:21.365467119Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-14T07:46:21.365471139Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-14T07:46:21.365475339Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-14T07:46:21.365479369Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-14T07:46:21.365490489Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-14T07:46:21.365493829Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-14T07:46:21.373164925Z level=info msg=FeatureToggles azureMonitorPrometheusExemplars=true alertRuleRestore=true alertingInsights=true prometheusUsesCombobox=true cloudWatchNewLabelParsing=true recordedQueriesMulti=true cloudWatchRoundUpEndTime=true nestedFolders=true prometheusAzureOverrideAudience=true logsPanelControls=true pinNavItems=true correlations=true promQLScope=true onPremToCloudMigrations=true formatString=true alertingUIOptimizeReducer=true alertingApiServer=true publicDashboardsScene=true annotationPermissionUpdate=true lokiLabelNamesQueryApi=true awsAsyncQueryCaching=true cloudWatchCrossAccountQuerying=true recoveryThreshold=true influxdbBackendMigration=true alertingRuleRecoverDeleted=true alertingRuleVersionHistoryRestore=true alertingRulePermanentlyDelete=true failWrongDSUID=true azureMonitorEnableUserAuth=true ssoSettingsSAML=true reportingUseRawTimeRange=true lokiQueryHints=true pluginsDetailsRightPanel=true tlsMemcached=true angularDeprecationUI=true newDashboardSharingComponent=true groupToNestedTableTransformation=true lokiQuerySplitting=true panelMonitoring=true newPDFRendering=true ssoSettingsApi=true unifiedStorageSearchPermissionFiltering=true transformationsRedesign=true dataplaneFrontendFallback=true kubernetesClientDashboardsFolders=true logsContextDatasourceUi=true grafanaconThemes=true dashboardSceneForViewers=true logsInfiniteScrolling=true lokiStructuredMetadata=true useSessionStorageForRedirection=true logRowsPopoverMenu=true alertingSimplifiedRouting=true kubernetesPlaylists=true dashboardScene=true alertingNotificationsStepMode=true externalCorePlugins=true unifiedRequestLog=true logsExploreTableVisualisation=true preinstallAutoUpdate=true alertingQueryAndExpressionsStepMode=true dashgpt=true newFiltersUI=true addFieldFromCalculationStatFunctions=true dashboardSceneSolo=true
grafana | logger=sqlstore t=2025-06-14T07:46:21.373446138Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-14T07:46:21.373484788Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-14T07:46:21.375803481Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-14T07:46:21.375819151Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-14T07:46:21.376635619Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-14T07:46:21.378240354Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.603465ms
grafana | logger=migrator t=2025-06-14T07:46:21.544440612Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-14T07:46:21.546454751Z level=info msg="Migration successfully executed" id="create user table" duration=2.016679ms
grafana | logger=migrator t=2025-06-14T07:46:21.590994847Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-14T07:46:21.592548062Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.553145ms
grafana | logger=migrator t=2025-06-14T07:46:21.596958276Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-14T07:46:21.597780394Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=821.178µs
grafana | logger=migrator t=2025-06-14T07:46:21.602495559Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-14T07:46:21.60347965Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=984.201µs
grafana | logger=migrator t=2025-06-14T07:46:21.61280604Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-14T07:46:21.613804621Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=998.261µs
grafana | logger=migrator t=2025-06-14T07:46:21.639366261Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-14T07:46:21.644121527Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.742626ms
grafana | logger=migrator t=2025-06-14T07:46:21.649990404Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-14T07:46:21.650697082Z level=info msg="Migration successfully executed" id="create user table v2" duration=706.488µs
grafana | logger=migrator t=2025-06-14T07:46:21.654983174Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-14T07:46:21.655886143Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=902.739µs
grafana | logger=migrator t=2025-06-14T07:46:21.658889702Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-14T07:46:21.659770811Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=880.759µs
grafana | logger=migrator t=2025-06-14T07:46:21.665412886Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-14T07:46:21.665955071Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=545.555µs
grafana | logger=migrator t=2025-06-14T07:46:21.669192672Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-14T07:46:21.66993205Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=739.058µs
grafana | logger=migrator t=2025-06-14T07:46:21.673304383Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-14T07:46:21.674553425Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.248062ms
grafana | logger=migrator t=2025-06-14T07:46:21.679833457Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-14T07:46:21.679859707Z level=info msg="Migration successfully executed" id="Update user table charset" duration=26.96µs
grafana | logger=migrator t=2025-06-14T07:46:21.688037757Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-14T07:46:21.689491732Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.456905ms
grafana | logger=migrator t=2025-06-14T07:46:21.692744484Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-14T07:46:21.692993486Z level=info msg="Migration successfully executed" id="Add missing user data" duration=249.242µs
grafana | logger=migrator t=2025-06-14T07:46:21.698300468Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-14T07:46:21.699390868Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.09105ms
grafana | logger=migrator t=2025-06-14T07:46:21.702246896Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-14T07:46:21.702892182Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=644.566µs
grafana | logger=migrator t=2025-06-14T07:46:21.706036343Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-14T07:46:21.707172965Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.135942ms
grafana | logger=migrator t=2025-06-14T07:46:21.710857201Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-14T07:46:21.721602416Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.737425ms
grafana | logger=migrator t=2025-06-14T07:46:21.729980038Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-14T07:46:21.73122205Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.243392ms
grafana | logger=migrator t=2025-06-14T07:46:21.735698154Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-14T07:46:21.735907316Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=209.082µs
grafana | logger=migrator t=2025-06-14T07:46:21.739708133Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-14T07:46:21.7403505Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=638.007µs
grafana | logger=migrator t=2025-06-14T07:46:21.743825674Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-14T07:46:21.744927414Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.10091ms
grafana | logger=migrator t=2025-06-14T07:46:21.79446411Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-14T07:46:21.795021165Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=561.335µs
grafana | logger=migrator t=2025-06-14T07:46:21.800426188Z level=info msg="Executing migration" id="update service accounts login field
orgid to appear only once" grafana | logger=migrator t=2025-06-14T07:46:21.801018343Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=591.645µs grafana | logger=migrator t=2025-06-14T07:46:21.807367496Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-14T07:46:21.807865441Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=498.015µs grafana | logger=migrator t=2025-06-14T07:46:21.812565796Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-14T07:46:21.81287239Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=303.664µs grafana | logger=migrator t=2025-06-14T07:46:21.819055221Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-14T07:46:21.820241542Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.187391ms grafana | logger=migrator t=2025-06-14T07:46:21.823697965Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-14T07:46:21.824421413Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=723.318µs grafana | logger=migrator t=2025-06-14T07:46:21.829960737Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-14T07:46:21.830731155Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=774.198µs grafana | logger=migrator t=2025-06-14T07:46:21.833628963Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-14T07:46:21.83437465Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=745.477µs grafana | logger=migrator t=2025-06-14T07:46:21.837581311Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-14T07:46:21.838361349Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=779.748µs grafana | logger=migrator t=2025-06-14T07:46:21.844170516Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-14T07:46:21.844190547Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=23.441µs grafana | logger=migrator t=2025-06-14T07:46:21.854346225Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-14T07:46:21.85889561Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=4.683456ms grafana | logger=migrator t=2025-06-14T07:46:21.863258973Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-14T07:46:21.874777656Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=11.510753ms grafana | logger=migrator t=2025-06-14T07:46:21.88239534Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-14T07:46:21.88335569Z level=info msg="Migration successfully executed" 
id="drop index IDX_temp_user_code - v1" duration=964.52µs grafana | logger=migrator t=2025-06-14T07:46:21.886774363Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-14T07:46:21.887416949Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=642.236µs grafana | logger=migrator t=2025-06-14T07:46:21.89362221Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:21.897187665Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.564235ms grafana | logger=migrator t=2025-06-14T07:46:21.923589054Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-14T07:46:21.924732635Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.148962ms grafana | logger=migrator t=2025-06-14T07:46:21.92829408Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-14T07:46:21.928844155Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=549.735µs grafana | logger=migrator t=2025-06-14T07:46:21.93549546Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-14T07:46:21.936244068Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=747.628µs grafana | logger=migrator t=2025-06-14T07:46:21.941444198Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-14T07:46:21.942225006Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=781.068µs grafana | logger=migrator t=2025-06-14T07:46:21.947307776Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-14T07:46:21.948235785Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=931.599µs grafana | logger=migrator t=2025-06-14T07:46:21.952967382Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:21.953373705Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=406.083µs grafana | logger=migrator t=2025-06-14T07:46:21.95795841Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-14T07:46:21.958524656Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=562.916µs grafana | logger=migrator t=2025-06-14T07:46:21.962639606Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-14T07:46:21.963134471Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=493.565µs grafana | logger=migrator t=2025-06-14T07:46:21.966531854Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-14T07:46:21.967385713Z level=info msg="Migration successfully executed" id="create star table" duration=850.429µs grafana | logger=migrator t=2025-06-14T07:46:21.972607764Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 
grafana | logger=migrator t=2025-06-14T07:46:21.973133069Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=525.185µs grafana | logger=migrator t=2025-06-14T07:46:21.978985366Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-14T07:46:21.980043136Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.05691ms grafana | logger=migrator t=2025-06-14T07:46:21.98348855Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-14T07:46:21.985910664Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.421374ms grafana | logger=migrator t=2025-06-14T07:46:21.991312417Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-14T07:46:21.992833202Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.519795ms grafana | logger=migrator t=2025-06-14T07:46:22.001064623Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-14T07:46:22.002005781Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=940.088µs grafana | logger=migrator t=2025-06-14T07:46:22.005831808Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-14T07:46:22.006704727Z level=info msg="Migration successfully executed" id="create org table v1" duration=872.559µs grafana | logger=migrator t=2025-06-14T07:46:22.012675434Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-14T07:46:22.014379331Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.707537ms grafana | logger=migrator t=2025-06-14T07:46:22.020445568Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-14T07:46:22.021179805Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=734.287µs grafana | logger=migrator t=2025-06-14T07:46:22.024220503Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.02496331Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=743.857µs grafana | logger=migrator t=2025-06-14T07:46:22.05873691Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.060012623Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.271743ms grafana | logger=migrator t=2025-06-14T07:46:22.064112041Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.065334953Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.221712ms grafana | logger=migrator t=2025-06-14T07:46:22.071402371Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-14T07:46:22.071432871Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.13µs grafana | logger=migrator t=2025-06-14T07:46:22.073768753Z 
level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-14T07:46:22.073797043Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=28.8µs grafana | logger=migrator t=2025-06-14T07:46:22.077020764Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-14T07:46:22.077308217Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=286.943µs grafana | logger=migrator t=2025-06-14T07:46:22.080784469Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-14T07:46:22.082045882Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.260473ms grafana | logger=migrator t=2025-06-14T07:46:22.088044948Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-14T07:46:22.088866156Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=820.508µs grafana | logger=migrator t=2025-06-14T07:46:22.091987975Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-14T07:46:22.092750212Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=761.787µs grafana | logger=migrator t=2025-06-14T07:46:22.096537789Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-14T07:46:22.097197015Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=658.836µs grafana | logger=migrator t=2025-06-14T07:46:22.100426695Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-14T07:46:22.101278724Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=848.989µs grafana | logger=migrator t=2025-06-14T07:46:22.10720256Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-14T07:46:22.1083117Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.10782ms grafana | logger=migrator t=2025-06-14T07:46:22.11776786Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-14T07:46:22.124512144Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.745064ms grafana | logger=migrator t=2025-06-14T07:46:22.128262259Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-14T07:46:22.128996596Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=733.917µs grafana | logger=migrator t=2025-06-14T07:46:22.133170696Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-14T07:46:22.133962273Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=791.007µs grafana | logger=migrator t=2025-06-14T07:46:22.140522386Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-14T07:46:22.141345994Z level=info msg="Migration successfully executed" 
id="create index UQE_dashboard_org_id_slug - v2" duration=822.718µs grafana | logger=migrator t=2025-06-14T07:46:22.145809055Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:22.146460482Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=650.107µs grafana | logger=migrator t=2025-06-14T07:46:22.151363198Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-14T07:46:22.152692481Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.328153ms grafana | logger=migrator t=2025-06-14T07:46:22.15682682Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-14T07:46:22.15684783Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=21.55µs grafana | logger=migrator t=2025-06-14T07:46:22.160439055Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-14T07:46:22.162414733Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.975978ms grafana | logger=migrator t=2025-06-14T07:46:22.213558098Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-14T07:46:22.216533016Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.978368ms grafana | logger=migrator t=2025-06-14T07:46:22.22223924Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.224040007Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.801157ms grafana | logger=migrator t=2025-06-14T07:46:22.227337279Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.228071315Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=731.286µs grafana | logger=migrator t=2025-06-14T07:46:22.232708649Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.234591117Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.882068ms grafana | logger=migrator t=2025-06-14T07:46:22.238598965Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.239378243Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=778.757µs grafana | logger=migrator t=2025-06-14T07:46:22.249984943Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-14T07:46:22.250956532Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=978.069µs grafana | logger=migrator t=2025-06-14T07:46:22.253702898Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-14T07:46:22.253724718Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=22.33µs grafana | logger=migrator t=2025-06-14T07:46:22.256659576Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator 
t=2025-06-14T07:46:22.256683536Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.49µs grafana | logger=migrator t=2025-06-14T07:46:22.262607702Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.266598461Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.995359ms grafana | logger=migrator t=2025-06-14T07:46:22.269563679Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.270976782Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.409273ms grafana | logger=migrator t=2025-06-14T07:46:22.273672737Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.275798738Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.125201ms grafana | logger=migrator t=2025-06-14T07:46:22.283097506Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.28561355Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.526504ms grafana | logger=migrator t=2025-06-14T07:46:22.289882051Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.290385126Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=506.245µs grafana | logger=migrator t=2025-06-14T07:46:22.295324403Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:22.296596914Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.272681ms grafana | logger=migrator t=2025-06-14T07:46:22.302622601Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-14T07:46:22.30343767Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=814.839µs grafana | logger=migrator t=2025-06-14T07:46:22.306966493Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-14T07:46:22.306997633Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=29.65µs grafana | logger=migrator t=2025-06-14T07:46:22.314841678Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-14T07:46:22.31615653Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.307722ms grafana | logger=migrator t=2025-06-14T07:46:22.348544137Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-14T07:46:22.349786158Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.241341ms grafana | logger=migrator t=2025-06-14T07:46:22.355002608Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:22.361279757Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 
duration=6.276169ms grafana | logger=migrator t=2025-06-14T07:46:22.369238233Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-14T07:46:22.37003643Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=796.067µs grafana | logger=migrator t=2025-06-14T07:46:22.374027169Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-14T07:46:22.374913707Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=874.638µs grafana | logger=migrator t=2025-06-14T07:46:22.378234478Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-14T07:46:22.379104347Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=869.249µs grafana | logger=migrator t=2025-06-14T07:46:22.38786102Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:22.388399665Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=537.495µs grafana | logger=migrator t=2025-06-14T07:46:22.394391741Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-14T07:46:22.395676574Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.287723ms grafana | logger=migrator t=2025-06-14T07:46:22.399609381Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-14T07:46:22.402049434Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.439043ms grafana | logger=migrator t=2025-06-14T07:46:22.405670208Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-14T07:46:22.406527777Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=852.708µs grafana | logger=migrator t=2025-06-14T07:46:22.413289191Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-14T07:46:22.413567553Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=278.242µs grafana | logger=migrator t=2025-06-14T07:46:22.416779563Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-14T07:46:22.417033216Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=252.772µs grafana | logger=migrator t=2025-06-14T07:46:22.420056044Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-14T07:46:22.420929243Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=872.229µs grafana | logger=migrator t=2025-06-14T07:46:22.424214694Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.428469514Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.25309ms grafana | logger=migrator t=2025-06-14T07:46:22.433731084Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | 
logger=migrator t=2025-06-14T07:46:22.436072296Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.341452ms grafana | logger=migrator t=2025-06-14T07:46:22.43966685Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-14T07:46:22.440546929Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=880.089µs grafana | logger=migrator t=2025-06-14T07:46:22.443768989Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-14T07:46:22.446626857Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.857488ms grafana | logger=migrator t=2025-06-14T07:46:22.451532613Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-14T07:46:22.453789514Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.255921ms grafana | logger=migrator t=2025-06-14T07:46:22.488330332Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-14T07:46:22.489165389Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=836.397µs grafana | logger=migrator t=2025-06-14T07:46:22.493471941Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-14T07:46:22.4976485Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=4.174959ms grafana | logger=migrator t=2025-06-14T07:46:22.502342565Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-14T07:46:22.503307183Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=965.708µs grafana | logger=migrator t=2025-06-14T07:46:22.507969878Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-14T07:46:22.508472643Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=501.915µs grafana | logger=migrator t=2025-06-14T07:46:22.512180967Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-14T07:46:22.513593041Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.375664ms grafana | logger=migrator t=2025-06-14T07:46:22.518007253Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-14T07:46:22.519546677Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.538624ms grafana | logger=migrator t=2025-06-14T07:46:22.525545054Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-14T07:46:22.526709855Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.166921ms grafana | logger=migrator t=2025-06-14T07:46:22.530513161Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.531256338Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" 
duration=742.897µs grafana | logger=migrator t=2025-06-14T07:46:22.537265095Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-14T07:46:22.538944161Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.678586ms grafana | logger=migrator t=2025-06-14T07:46:22.543502874Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-14T07:46:22.552707382Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.206078ms grafana | logger=migrator t=2025-06-14T07:46:22.567189449Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-14T07:46:22.567877575Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=687.476µs grafana | logger=migrator t=2025-06-14T07:46:22.571705931Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-14T07:46:22.572301938Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=595.627µs grafana | logger=migrator t=2025-06-14T07:46:22.575679099Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-14T07:46:22.576271685Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=591.346µs grafana | logger=migrator t=2025-06-14T07:46:22.583402583Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-14T07:46:22.583792406Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=389.053µs grafana | logger=migrator t=2025-06-14T07:46:22.586694444Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-14T07:46:22.588384719Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.689865ms grafana | logger=migrator t=2025-06-14T07:46:22.629937603Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-14T07:46:22.632397147Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.462534ms grafana | logger=migrator t=2025-06-14T07:46:22.638573315Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-14T07:46:22.638592655Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=19.37µs grafana | logger=migrator t=2025-06-14T07:46:22.643095128Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-14T07:46:22.64325233Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=156.922µs grafana | logger=migrator t=2025-06-14T07:46:22.645564222Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-14T07:46:22.647770713Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.205911ms grafana | logger=migrator t=2025-06-14T07:46:22.654478306Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-14T07:46:22.654693358Z 
level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=212.382µs grafana | logger=migrator t=2025-06-14T07:46:22.662735344Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-14T07:46:22.663040897Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=309.383µs grafana | logger=migrator t=2025-06-14T07:46:22.667224067Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-14T07:46:22.669873142Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.684605ms grafana | logger=migrator t=2025-06-14T07:46:22.675419174Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-14T07:46:22.675631837Z level=info msg="Migration successfully executed" id="Update uid value" duration=212.963µs grafana | logger=migrator t=2025-06-14T07:46:22.679913417Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:22.680788435Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=873.858µs grafana | logger=migrator t=2025-06-14T07:46:22.687258377Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-14T07:46:22.688104965Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=846.138µs grafana | logger=migrator t=2025-06-14T07:46:22.692281405Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-14T07:46:22.694840748Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.558733ms grafana | logger=migrator t=2025-06-14T07:46:22.69808558Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-14T07:46:22.700583733Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.497712ms grafana | logger=migrator t=2025-06-14T07:46:22.703291688Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-14T07:46:22.703313548Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=22.37µs grafana | logger=migrator t=2025-06-14T07:46:22.708165535Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-14T07:46:22.708907601Z level=info msg="Migration successfully executed" id="create api_key table" duration=739.496µs grafana | logger=migrator t=2025-06-14T07:46:22.711839879Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-14T07:46:22.712579467Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=738.678µs grafana | logger=migrator t=2025-06-14T07:46:22.717033369Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-14T07:46:22.717830747Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=796.598µs grafana | logger=migrator t=2025-06-14T07:46:22.722968395Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-14T07:46:22.723775583Z level=info msg="Migration successfully executed" id="add index 
api_key.account_id_name" duration=806.878µs grafana | logger=migrator t=2025-06-14T07:46:22.726801092Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.727565318Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=763.626µs grafana | logger=migrator t=2025-06-14T07:46:22.731476996Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-14T07:46:22.732269923Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=791.558µs grafana | logger=migrator t=2025-06-14T07:46:22.756768686Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-14T07:46:22.757587563Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=810.257µs grafana | logger=migrator t=2025-06-14T07:46:22.765040113Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-14T07:46:22.775939307Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.899604ms grafana | logger=migrator t=2025-06-14T07:46:22.782445149Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-14T07:46:22.783011154Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=566.095µs grafana | logger=migrator t=2025-06-14T07:46:22.787449046Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-14T07:46:22.788016772Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=566.736µs grafana | logger=migrator t=2025-06-14T07:46:22.794266551Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-14T07:46:22.794806726Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=537.255µs grafana | logger=migrator t=2025-06-14T07:46:22.798220268Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-14T07:46:22.798778354Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=557.746µs grafana | logger=migrator t=2025-06-14T07:46:22.802052645Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:22.802287237Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=230.742µs grafana | logger=migrator t=2025-06-14T07:46:22.811632125Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-14T07:46:22.812034399Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=401.934µs grafana | logger=migrator t=2025-06-14T07:46:22.815566153Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-14T07:46:22.815587853Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=22.26µs grafana | logger=migrator t=2025-06-14T07:46:22.818991395Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-14T07:46:22.820878252Z level=info 
msg="Migration successfully executed" id="Add expires to api_key table" duration=1.886597ms grafana | logger=migrator t=2025-06-14T07:46:22.828353133Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-14T07:46:22.832415943Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.062299ms grafana | logger=migrator t=2025-06-14T07:46:22.836114777Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-14T07:46:22.836243538Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=130.761µs grafana | logger=migrator t=2025-06-14T07:46:22.839246787Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-14T07:46:22.841922882Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.675145ms grafana | logger=migrator t=2025-06-14T07:46:22.845613397Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-14T07:46:22.848890098Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.279811ms grafana | logger=migrator t=2025-06-14T07:46:22.855119407Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-14T07:46:22.855720543Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=604.386µs grafana | logger=migrator t=2025-06-14T07:46:22.864679018Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-14T07:46:22.865651107Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=968.949µs grafana | logger=migrator t=2025-06-14T07:46:22.906898878Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-14T07:46:22.908449543Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.554475ms grafana | logger=migrator t=2025-06-14T07:46:22.914650251Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-14T07:46:22.915682101Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.03215ms grafana | logger=migrator t=2025-06-14T07:46:22.922312294Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-14T07:46:22.923223873Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=909.419µs grafana | logger=migrator t=2025-06-14T07:46:22.927546503Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-14T07:46:22.928431942Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=881.659µs grafana | logger=migrator t=2025-06-14T07:46:22.932210207Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-14T07:46:22.932229958Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=20.511µs 
grafana | logger=migrator t=2025-06-14T07:46:22.941461255Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-14T07:46:22.941494725Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=34.46µs grafana | logger=migrator t=2025-06-14T07:46:22.945164451Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-14T07:46:22.948145089Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.981038ms grafana | logger=migrator t=2025-06-14T07:46:22.952350788Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-14T07:46:22.957022643Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.667045ms grafana | logger=migrator t=2025-06-14T07:46:22.962292483Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-14T07:46:22.962360584Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=69.061µs grafana | logger=migrator t=2025-06-14T07:46:22.966184989Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-14T07:46:22.966964527Z level=info msg="Migration successfully executed" id="create quota table v1" duration=783.358µs grafana | logger=migrator t=2025-06-14T07:46:22.971222077Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-14T07:46:22.972091175Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=868.328µs grafana | logger=migrator t=2025-06-14T07:46:22.978953991Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-14T07:46:22.979083642Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=133.121µs grafana | logger=migrator t=2025-06-14T07:46:22.984554484Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-14T07:46:22.985691004Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.14046ms grafana | logger=migrator t=2025-06-14T07:46:22.990830763Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-14T07:46:22.991900863Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.06973ms grafana | logger=migrator t=2025-06-14T07:46:22.995119114Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-14T07:46:22.998366215Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.246551ms grafana | logger=migrator t=2025-06-14T07:46:23.005385781Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-14T07:46:23.005444031Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=58.55µs grafana | logger=migrator t=2025-06-14T07:46:23.008933918Z level=info msg="Executing migration" 
id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-14T07:46:23.009476824Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=543.766µs grafana | logger=migrator t=2025-06-14T07:46:23.01321097Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-14T07:46:23.02124213Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=8.03019ms grafana | logger=migrator t=2025-06-14T07:46:23.050317548Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-14T07:46:23.05145512Z level=info msg="Migration successfully executed" id="create session table" duration=1.138412ms grafana | logger=migrator t=2025-06-14T07:46:23.055755373Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-14T07:46:23.055947065Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=190.971µs grafana | logger=migrator t=2025-06-14T07:46:23.059271988Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-14T07:46:23.05946857Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=198.412µs grafana | logger=migrator t=2025-06-14T07:46:23.062400119Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-14T07:46:23.063146227Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=745.558µs grafana | logger=migrator t=2025-06-14T07:46:23.068554691Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-14T07:46:23.069342358Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=786.987µs grafana | logger=migrator t=2025-06-14T07:46:23.07259513Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-14T07:46:23.072622831Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.541µs grafana | logger=migrator t=2025-06-14T07:46:23.075505809Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-14T07:46:23.07553232Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.511µs grafana | logger=migrator t=2025-06-14T07:46:23.083975844Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-14T07:46:23.088725462Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.781708ms grafana | logger=migrator t=2025-06-14T07:46:23.09156681Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-14T07:46:23.094773382Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.205822ms grafana | logger=migrator t=2025-06-14T07:46:23.097731371Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-14T07:46:23.097853073Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=121.082µs grafana | logger=migrator t=2025-06-14T07:46:23.100715211Z level=info msg="Executing migration" id="drop preferences table v3" grafana | 
logger=migrator t=2025-06-14T07:46:23.100830512Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=114.341µs grafana | logger=migrator t=2025-06-14T07:46:23.107473559Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-14T07:46:23.108397848Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=923.059µs grafana | logger=migrator t=2025-06-14T07:46:23.111431068Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-14T07:46:23.111484819Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=53.511µs grafana | logger=migrator t=2025-06-14T07:46:23.114266936Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-14T07:46:23.117999533Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.731297ms grafana | logger=migrator t=2025-06-14T07:46:23.123735131Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-14T07:46:23.123986004Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=250.093µs grafana | logger=migrator t=2025-06-14T07:46:23.127239976Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-14T07:46:23.130566509Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.325603ms grafana | logger=migrator t=2025-06-14T07:46:23.13362294Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-14T07:46:23.136872372Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.252242ms grafana | logger=migrator t=2025-06-14T07:46:23.142283366Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-14T07:46:23.142343266Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=60.38µs grafana | logger=migrator t=2025-06-14T07:46:23.14567839Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-14T07:46:23.14667992Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.0013ms grafana | logger=migrator t=2025-06-14T07:46:23.149922582Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-14T07:46:23.150846191Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=923.359µs grafana | logger=migrator t=2025-06-14T07:46:23.157093533Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-14T07:46:23.158152855Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.057952ms grafana | logger=migrator t=2025-06-14T07:46:23.171225574Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-14T07:46:23.172367626Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.139062ms grafana | logger=migrator t=2025-06-14T07:46:23.176602908Z level=info msg="Executing migration" id="add index alert 
state" grafana | logger=migrator t=2025-06-14T07:46:23.177583428Z level=info msg="Migration successfully executed" id="add index alert state" duration=979.39µs grafana | logger=migrator t=2025-06-14T07:46:23.187953881Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-14T07:46:23.189625318Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.674657ms grafana | logger=migrator t=2025-06-14T07:46:23.193097053Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-14T07:46:23.193892791Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=796.658µs grafana | logger=migrator t=2025-06-14T07:46:23.199515957Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-14T07:46:23.200355575Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=839.298µs grafana | logger=migrator t=2025-06-14T07:46:23.205237424Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-14T07:46:23.206067342Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=828.698µs grafana | logger=migrator t=2025-06-14T07:46:23.209721909Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-14T07:46:23.221383075Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.665537ms grafana | logger=migrator t=2025-06-14T07:46:23.224989281Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-14T07:46:23.225570737Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=580.346µs grafana | logger=migrator t=2025-06-14T07:46:23.231838829Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-14T07:46:23.232689907Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=850.388µs grafana | logger=migrator t=2025-06-14T07:46:23.238635018Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:23.2389156Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=280.312µs grafana | logger=migrator t=2025-06-14T07:46:23.241839609Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-14T07:46:23.242597537Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=756.828µs grafana | logger=migrator t=2025-06-14T07:46:23.250442125Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-14T07:46:23.251580236Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.137071ms grafana | logger=migrator t=2025-06-14T07:46:23.256391694Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-14T07:46:23.260781348Z 
level=info msg="Migration successfully executed" id="Add column is_default" duration=4.387074ms grafana | logger=migrator t=2025-06-14T07:46:23.264324124Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-14T07:46:23.26800322Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.667286ms grafana | logger=migrator t=2025-06-14T07:46:23.272891489Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-14T07:46:23.276930269Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.03783ms grafana | logger=migrator t=2025-06-14T07:46:23.312880628Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-14T07:46:23.319351982Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=6.471974ms grafana | logger=migrator t=2025-06-14T07:46:23.322882118Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-14T07:46:23.323697256Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=814.608µs grafana | logger=migrator t=2025-06-14T07:46:23.326861107Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-14T07:46:23.326887017Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.68µs grafana | logger=migrator t=2025-06-14T07:46:23.344123169Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-14T07:46:23.344146969Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=27.09µs grafana | logger=migrator t=2025-06-14T07:46:23.350051258Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-14T07:46:23.350640545Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=589.517µs grafana | logger=migrator t=2025-06-14T07:46:23.354269831Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-14T07:46:23.355665194Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.392943ms grafana | logger=migrator t=2025-06-14T07:46:23.36128127Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-14T07:46:23.362565234Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.283754ms grafana | logger=migrator t=2025-06-14T07:46:23.370884037Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-14T07:46:23.371667214Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=786.317µs grafana | logger=migrator t=2025-06-14T07:46:23.380759855Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-14T07:46:23.382495092Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.737938ms grafana | logger=migrator 
t=2025-06-14T07:46:23.389099278Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-14T07:46:23.393193859Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.097521ms grafana | logger=migrator t=2025-06-14T07:46:23.396485622Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-14T07:46:23.400549972Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.06371ms grafana | logger=migrator t=2025-06-14T07:46:23.403747874Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-14T07:46:23.404078908Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=328.384µs grafana | logger=migrator t=2025-06-14T07:46:23.410443241Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:23.411618332Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.175201ms grafana | logger=migrator t=2025-06-14T07:46:23.41525251Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-14T07:46:23.416207219Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=953.559µs grafana | logger=migrator t=2025-06-14T07:46:23.461742563Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-14T07:46:23.468357549Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.616266ms grafana | logger=migrator t=2025-06-14T07:46:23.473460209Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-14T07:46:23.473478279Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=16.18µs grafana | logger=migrator t=2025-06-14T07:46:23.476678102Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-14T07:46:23.478598271Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.919469ms grafana | logger=migrator t=2025-06-14T07:46:23.482312749Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-14T07:46:23.483829423Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.511514ms grafana | logger=migrator t=2025-06-14T07:46:23.489800723Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-14T07:46:23.489991664Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=190.532µs grafana | logger=migrator t=2025-06-14T07:46:23.495887163Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-14T07:46:23.497697741Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.809308ms grafana | logger=migrator t=2025-06-14T07:46:23.501341997Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator 
t=2025-06-14T07:46:23.502393589Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.051292ms grafana | logger=migrator t=2025-06-14T07:46:23.505882953Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-14T07:46:23.506951463Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.06845ms grafana | logger=migrator t=2025-06-14T07:46:23.513343308Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-14T07:46:23.514300097Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=956.399µs grafana | logger=migrator t=2025-06-14T07:46:23.518915424Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-14T07:46:23.520530119Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.609956ms grafana | logger=migrator t=2025-06-14T07:46:23.527648281Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-14T07:46:23.528781121Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.13247ms grafana | logger=migrator t=2025-06-14T07:46:23.532040144Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-14T07:46:23.532063884Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.44µs grafana | logger=migrator t=2025-06-14T07:46:23.535360917Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.539906262Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.544645ms grafana | logger=migrator t=2025-06-14T07:46:23.543017213Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-14T07:46:23.544021094Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.005101ms grafana | logger=migrator t=2025-06-14T07:46:23.55067861Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.555040413Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.360993ms grafana | logger=migrator t=2025-06-14T07:46:23.558760831Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-14T07:46:23.559657739Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=898.698µs grafana | logger=migrator t=2025-06-14T07:46:23.562895421Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-14T07:46:23.563877632Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=981.301µs grafana | logger=migrator t=2025-06-14T07:46:23.607008172Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-14T07:46:23.608882071Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.873899ms grafana | logger=migrator t=2025-06-14T07:46:23.612909411Z level=info msg="Executing migration" 
id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-14T07:46:23.623706418Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=10.797647ms grafana | logger=migrator t=2025-06-14T07:46:23.627087112Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-14T07:46:23.627677578Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=589.676µs grafana | logger=migrator t=2025-06-14T07:46:23.633733299Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-14T07:46:23.635834009Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=2.09993ms grafana | logger=migrator t=2025-06-14T07:46:23.641003301Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-14T07:46:23.641551886Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=545.225µs grafana | logger=migrator t=2025-06-14T07:46:23.644944831Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-14T07:46:23.645643588Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=698.327µs grafana | logger=migrator t=2025-06-14T07:46:23.651029811Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-14T07:46:23.651505216Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=474.755µs grafana | logger=migrator t=2025-06-14T07:46:23.655248043Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.660578886Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.331003ms grafana | logger=migrator t=2025-06-14T07:46:23.663749388Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.667651117Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.899459ms grafana | logger=migrator t=2025-06-14T07:46:23.674340794Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.675338673Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=996.869µs grafana | logger=migrator t=2025-06-14T07:46:23.678693527Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.679659607Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=965.18µs grafana | logger=migrator t=2025-06-14T07:46:23.683423714Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-14T07:46:23.683724097Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=299.783µs grafana | logger=migrator 
t=2025-06-14T07:46:23.690623896Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-14T07:46:23.696742487Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.115021ms grafana | logger=migrator t=2025-06-14T07:46:23.701999579Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-14T07:46:23.703039529Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.03958ms grafana | logger=migrator t=2025-06-14T07:46:23.707756987Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-14T07:46:23.70802721Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=269.513µs grafana | logger=migrator t=2025-06-14T07:46:23.743660125Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-14T07:46:23.744475003Z level=info msg="Migration successfully executed" id="Move region to single row" duration=816.748µs grafana | logger=migrator t=2025-06-14T07:46:23.748766087Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.749881347Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.115341ms grafana | logger=migrator t=2025-06-14T07:46:23.756930528Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.75813833Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.208202ms grafana | logger=migrator t=2025-06-14T07:46:23.762508874Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.763597674Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.08829ms grafana | logger=migrator t=2025-06-14T07:46:23.768410912Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.76919497Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=783.378µs grafana | logger=migrator t=2025-06-14T07:46:23.774495013Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.776459273Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.95724ms grafana | logger=migrator t=2025-06-14T07:46:23.781707894Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-14T07:46:23.782929137Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.221333ms grafana | logger=migrator t=2025-06-14T07:46:23.786657384Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-14T07:46:23.786681344Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=25.36µs grafana | logger=migrator 
t=2025-06-14T07:46:23.79227986Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-14T07:46:23.79230357Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=25.2µs grafana | logger=migrator t=2025-06-14T07:46:23.796016798Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-14T07:46:23.796035388Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=18.93µs grafana | logger=migrator t=2025-06-14T07:46:23.800197769Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-14T07:46:23.801436001Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.229122ms grafana | logger=migrator t=2025-06-14T07:46:23.806416021Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-14T07:46:23.807719534Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.303943ms grafana | logger=migrator t=2025-06-14T07:46:23.812777824Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-14T07:46:23.813920006Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.141022ms grafana | logger=migrator t=2025-06-14T07:46:23.817637253Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-14T07:46:23.819012717Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.375404ms grafana | logger=migrator t=2025-06-14T07:46:23.822736714Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-14T07:46:23.823035807Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=298.713µs grafana | logger=migrator t=2025-06-14T07:46:23.827581343Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-14T07:46:23.828193419Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=611.916µs grafana | logger=migrator t=2025-06-14T07:46:23.832716494Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-14T07:46:23.832739154Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=22.67µs grafana | logger=migrator t=2025-06-14T07:46:23.836485591Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-14T07:46:23.841268149Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.781368ms grafana | logger=migrator t=2025-06-14T07:46:23.847686063Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-14T07:46:23.848548641Z level=info msg="Migration successfully executed" id="create team table" duration=861.838µs grafana | logger=migrator t=2025-06-14T07:46:23.877870334Z level=info msg="Executing 
migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-14T07:46:23.879034915Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.174311ms grafana | logger=migrator t=2025-06-14T07:46:23.883164307Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-14T07:46:23.884186707Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.0219ms grafana | logger=migrator t=2025-06-14T07:46:23.889101066Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-14T07:46:23.89445639Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.353764ms grafana | logger=migrator t=2025-06-14T07:46:23.897712962Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-14T07:46:23.898010135Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=296.793µs grafana | logger=migrator t=2025-06-14T07:46:23.901227517Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:23.902743143Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.515376ms grafana | logger=migrator t=2025-06-14T07:46:23.908412069Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-14T07:46:23.913009474Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.596565ms grafana | logger=migrator t=2025-06-14T07:46:23.915837523Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-14T07:46:23.920342358Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.507056ms grafana | logger=migrator t=2025-06-14T07:46:23.924042264Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-14T07:46:23.924852963Z level=info msg="Migration successfully executed" id="create team member table" duration=810.569µs grafana | logger=migrator t=2025-06-14T07:46:23.928511869Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-14T07:46:23.929903223Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.385914ms grafana | logger=migrator t=2025-06-14T07:46:23.93361416Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-14T07:46:23.935000374Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.385274ms grafana | logger=migrator t=2025-06-14T07:46:23.942718701Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-14T07:46:23.94455072Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.831299ms grafana | logger=migrator t=2025-06-14T07:46:23.95059474Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-14T07:46:23.956844732Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.248972ms grafana | logger=migrator t=2025-06-14T07:46:23.962330397Z level=info 
msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-14T07:46:23.967036024Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.704447ms grafana | logger=migrator t=2025-06-14T07:46:23.969963893Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-14T07:46:23.974579208Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.614685ms grafana | logger=migrator t=2025-06-14T07:46:23.977841451Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-14T07:46:23.978866942Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.025081ms grafana | logger=migrator t=2025-06-14T07:46:24.021975896Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-14T07:46:24.022876854Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=901.078µs grafana | logger=migrator t=2025-06-14T07:46:24.026265317Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-14T07:46:24.027155715Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=890.148µs grafana | logger=migrator t=2025-06-14T07:46:24.030576377Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-14T07:46:24.031446286Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=869.349µs grafana | logger=migrator t=2025-06-14T07:46:24.036490644Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-14T07:46:24.037352003Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=864.049µs grafana | logger=migrator t=2025-06-14T07:46:24.040419752Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-14T07:46:24.041329151Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=909.009µs grafana | logger=migrator t=2025-06-14T07:46:24.0444453Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-14T07:46:24.04542933Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=984.03µs grafana | logger=migrator t=2025-06-14T07:46:24.052171114Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-14T07:46:24.053081093Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=909.019µs grafana | logger=migrator t=2025-06-14T07:46:24.056297483Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-14T07:46:24.057894309Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.600116ms grafana | logger=migrator t=2025-06-14T07:46:24.061365962Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-14T07:46:24.061872527Z 
level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=505.895µs grafana | logger=migrator t=2025-06-14T07:46:24.066972696Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-14T07:46:24.067249978Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=276.302µs grafana | logger=migrator t=2025-06-14T07:46:24.074577328Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-14T07:46:24.075744439Z level=info msg="Migration successfully executed" id="create tag table" duration=1.170591ms grafana | logger=migrator t=2025-06-14T07:46:24.078867299Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-14T07:46:24.080008981Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.141112ms grafana | logger=migrator t=2025-06-14T07:46:24.083198661Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-14T07:46:24.084014218Z level=info msg="Migration successfully executed" id="create login attempt table" duration=814.697µs grafana | logger=migrator t=2025-06-14T07:46:24.089794674Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-14T07:46:24.091523931Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.728377ms grafana | logger=migrator t=2025-06-14T07:46:24.097808871Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-14T07:46:24.099055113Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.248493ms grafana | logger=migrator t=2025-06-14T07:46:24.106408502Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:24.117357857Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.945345ms grafana | logger=migrator t=2025-06-14T07:46:24.120923291Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-14T07:46:24.121615049Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=692.628µs grafana | logger=migrator t=2025-06-14T07:46:24.154701465Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-14T07:46:24.15634998Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.647365ms grafana | logger=migrator t=2025-06-14T07:46:24.164415788Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:24.164961253Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=543.405µs grafana | logger=migrator t=2025-06-14T07:46:24.170843069Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-14T07:46:24.171481316Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=637.627µs grafana | logger=migrator t=2025-06-14T07:46:24.175453043Z level=info msg="Executing migration" 
id="create user auth table" grafana | logger=migrator t=2025-06-14T07:46:24.176687554Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.233971ms grafana | logger=migrator t=2025-06-14T07:46:24.183219168Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-14T07:46:24.184153236Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=933.398µs grafana | logger=migrator t=2025-06-14T07:46:24.190770029Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-14T07:46:24.190798899Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=29.01µs grafana | logger=migrator t=2025-06-14T07:46:24.194808788Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.202856786Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.049308ms grafana | logger=migrator t=2025-06-14T07:46:24.208699231Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.212440037Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.741686ms grafana | logger=migrator t=2025-06-14T07:46:24.216369275Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.220352273Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.059619ms grafana | logger=migrator t=2025-06-14T07:46:24.223577923Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.227682643Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.10394ms grafana | logger=migrator t=2025-06-14T07:46:24.232078534Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.233010574Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=931.2µs grafana | logger=migrator t=2025-06-14T07:46:24.236458467Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.241596116Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.136419ms grafana | logger=migrator t=2025-06-14T07:46:24.245245421Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-14T07:46:24.250815414Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.568573ms grafana | logger=migrator t=2025-06-14T07:46:24.259133464Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-14T07:46:24.260400445Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.266391ms grafana | logger=migrator t=2025-06-14T07:46:24.265811997Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-14T07:46:24.266902108Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.086591ms grafana | 
logger=migrator t=2025-06-14T07:46:24.289583815Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-14T07:46:24.290729865Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.14909ms grafana | logger=migrator t=2025-06-14T07:46:24.297311618Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-14T07:46:24.298144387Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=831.799µs grafana | logger=migrator t=2025-06-14T07:46:24.30161977Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-14T07:46:24.302693449Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.072659ms grafana | logger=migrator t=2025-06-14T07:46:24.309354144Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-14T07:46:24.31105146Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.696616ms grafana | logger=migrator t=2025-06-14T07:46:24.316441171Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-14T07:46:24.322115885Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.673654ms grafana | logger=migrator t=2025-06-14T07:46:24.325799141Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-14T07:46:24.326866691Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.07002ms grafana | logger=migrator t=2025-06-14T07:46:24.330524016Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-14T07:46:24.339267249Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.742793ms grafana | logger=migrator t=2025-06-14T07:46:24.345313788Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-14T07:46:24.345964784Z level=info msg="Migration successfully executed" id="create cache_data table" duration=649.876µs grafana | logger=migrator t=2025-06-14T07:46:24.34981835Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-14T07:46:24.351481047Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.662437ms grafana | logger=migrator t=2025-06-14T07:46:24.355582665Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-14T07:46:24.356618186Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.038481ms grafana | logger=migrator t=2025-06-14T07:46:24.361604783Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-14T07:46:24.362737214Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.132681ms grafana | logger=migrator t=2025-06-14T07:46:24.367757113Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator 
t=2025-06-14T07:46:24.367780233Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=23.58µs grafana | logger=migrator t=2025-06-14T07:46:24.373050032Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-14T07:46:24.373290295Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=239.733µs grafana | logger=migrator t=2025-06-14T07:46:24.378207582Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-14T07:46:24.379961359Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.751777ms grafana | logger=migrator t=2025-06-14T07:46:24.383341781Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T07:46:24.384390872Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.048681ms grafana | logger=migrator t=2025-06-14T07:46:24.387718503Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T07:46:24.388974805Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.254872ms grafana | logger=migrator t=2025-06-14T07:46:24.393317877Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T07:46:24.393337357Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=20.06µs grafana | logger=migrator t=2025-06-14T07:46:24.397743809Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T07:46:24.399033312Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.288503ms grafana | logger=migrator t=2025-06-14T07:46:24.445549586Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T07:46:24.447115061Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.564926ms grafana | logger=migrator t=2025-06-14T07:46:24.452717155Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T07:46:24.453848355Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.13129ms grafana | logger=migrator t=2025-06-14T07:46:24.457614672Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T07:46:24.45856858Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=952.838µs grafana | logger=migrator t=2025-06-14T07:46:24.462311966Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-14T07:46:24.468103361Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.790565ms grafana | 
logger=migrator t=2025-06-14T07:46:24.47411631Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-14T07:46:24.475083269Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=966.639µs grafana | logger=migrator t=2025-06-14T07:46:24.478871234Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-14T07:46:24.478974375Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=101.711µs grafana | logger=migrator t=2025-06-14T07:46:24.48255317Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-14T07:46:24.483452268Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=898.818µs grafana | logger=migrator t=2025-06-14T07:46:24.488566678Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-14T07:46:24.490229063Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.663325ms grafana | logger=migrator t=2025-06-14T07:46:24.494619695Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-14T07:46:24.495785386Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.165491ms grafana | logger=migrator t=2025-06-14T07:46:24.501840234Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T07:46:24.501860434Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=21.15µs grafana | logger=migrator t=2025-06-14T07:46:24.506561679Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-14T07:46:24.508030664Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.467285ms grafana | logger=migrator t=2025-06-14T07:46:24.51186997Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-14T07:46:24.513292354Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.421994ms grafana | logger=migrator t=2025-06-14T07:46:24.517817117Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-14T07:46:24.518796727Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=979.07µs grafana | logger=migrator t=2025-06-14T07:46:24.524526402Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-14T07:46:24.526096606Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.569374ms grafana | logger=migrator t=2025-06-14T07:46:24.530791371Z level=info msg="Executing 
migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.537246453Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.454442ms grafana | logger=migrator t=2025-06-14T07:46:24.542025389Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.542697645Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=672.346µs grafana | logger=migrator t=2025-06-14T07:46:24.546082087Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.546721563Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=637.426µs grafana | logger=migrator t=2025-06-14T07:46:24.590756724Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.612733385Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.979181ms grafana | logger=migrator t=2025-06-14T07:46:24.617806433Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.641671411Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.860858ms grafana | logger=migrator t=2025-06-14T07:46:24.646878401Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.647614229Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=735.808µs grafana | logger=migrator t=2025-06-14T07:46:24.653458715Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.655461393Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=2.079729ms grafana | logger=migrator t=2025-06-14T07:46:24.659523602Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-14T07:46:24.666760701Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.237669ms grafana | logger=migrator t=2025-06-14T07:46:24.670698379Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-14T07:46:24.676746457Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.021368ms grafana | logger=migrator t=2025-06-14T07:46:24.682688654Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:24.683782054Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.09195ms grafana | logger=migrator t=2025-06-14T07:46:24.686729552Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-14T07:46:24.688463169Z level=info msg="Migration successfully executed" id="add index in 
alert_rule on org_id and title columns" duration=1.732537ms grafana | logger=migrator t=2025-06-14T07:46:24.692986612Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-14T07:46:24.694679228Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.693006ms grafana | logger=migrator t=2025-06-14T07:46:24.753796643Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-14T07:46:24.7554615Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.663367ms grafana | logger=migrator t=2025-06-14T07:46:24.760362537Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T07:46:24.760390757Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=29.21µs grafana | logger=migrator t=2025-06-14T07:46:24.765334254Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.771530624Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.196649ms grafana | logger=migrator t=2025-06-14T07:46:24.774879205Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.781513018Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.635913ms grafana | logger=migrator t=2025-06-14T07:46:24.787176052Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.792340272Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.16308ms grafana | logger=migrator t=2025-06-14T07:46:24.795628803Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-14T07:46:24.796591873Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=962.45µs grafana | logger=migrator t=2025-06-14T07:46:24.800128417Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-14T07:46:24.801266417Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.13719ms grafana | logger=migrator t=2025-06-14T07:46:24.807578018Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.816464593Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.886845ms grafana | logger=migrator t=2025-06-14T07:46:24.819489032Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.824057785Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.567573ms grafana | logger=migrator t=2025-06-14T07:46:24.834069791Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator 
t=2025-06-14T07:46:24.83597152Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.894839ms grafana | logger=migrator t=2025-06-14T07:46:24.841107009Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:24.850258166Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.151237ms grafana | logger=migrator t=2025-06-14T07:46:24.853597488Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:24.859843647Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.245279ms grafana | logger=migrator t=2025-06-14T07:46:24.891222937Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:24.891278228Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=53.021µs grafana | logger=migrator t=2025-06-14T07:46:24.897623739Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:24.899022972Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.399113ms grafana | logger=migrator t=2025-06-14T07:46:24.903013321Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-14T07:46:24.904083191Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.06589ms grafana | logger=migrator t=2025-06-14T07:46:24.907547804Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-14T07:46:24.908562014Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.014ms grafana | logger=migrator t=2025-06-14T07:46:24.91352145Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T07:46:24.913539521Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=18.781µs grafana | logger=migrator t=2025-06-14T07:46:24.918494619Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:24.925108802Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.608773ms grafana | logger=migrator t=2025-06-14T07:46:24.928359213Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:24.935285949Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.926266ms grafana | logger=migrator t=2025-06-14T07:46:24.939363629Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:24.943913351Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.548932ms 
grafana | logger=migrator t=2025-06-14T07:46:24.947659268Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:24.953909468Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.24895ms grafana | logger=migrator t=2025-06-14T07:46:24.957069838Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-14T07:46:24.963400948Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.308849ms grafana | logger=migrator t=2025-06-14T07:46:24.969115912Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:24.969140282Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=24.28µs grafana | logger=migrator t=2025-06-14T07:46:24.973120601Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-14T07:46:24.973984909Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=863.858µs grafana | logger=migrator t=2025-06-14T07:46:24.979836325Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-14T07:46:24.986858573Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.021398ms grafana | logger=migrator t=2025-06-14T07:46:24.990443487Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-14T07:46:24.990516908Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=78.981µs grafana | logger=migrator t=2025-06-14T07:46:24.994199382Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-14T07:46:25.001923766Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.724014ms grafana | logger=migrator t=2025-06-14T07:46:25.017323813Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-14T07:46:25.018524075Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.200782ms grafana | logger=migrator t=2025-06-14T07:46:25.021931237Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-14T07:46:25.028585571Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.653554ms grafana | logger=migrator t=2025-06-14T07:46:25.031790162Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-14T07:46:25.032910752Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.11967ms grafana | logger=migrator t=2025-06-14T07:46:25.037335435Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-14T07:46:25.038480165Z level=info 
msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.13819ms grafana | logger=migrator t=2025-06-14T07:46:25.042822207Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-14T07:46:25.049820174Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.996717ms grafana | logger=migrator t=2025-06-14T07:46:25.05361149Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-14T07:46:25.054526309Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=913.619µs grafana | logger=migrator t=2025-06-14T07:46:25.058592867Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-14T07:46:25.059750409Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.156772ms grafana | logger=migrator t=2025-06-14T07:46:25.062823658Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-14T07:46:25.063701356Z level=info msg="Migration successfully executed" id="create alert_image table" duration=877.208µs grafana | logger=migrator t=2025-06-14T07:46:25.068600513Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-14T07:46:25.070004296Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.407353ms grafana | logger=migrator t=2025-06-14T07:46:25.073544401Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-14T07:46:25.073564721Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=21.26µs grafana | logger=migrator t=2025-06-14T07:46:25.077146845Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-14T07:46:25.078225715Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.07776ms grafana | logger=migrator t=2025-06-14T07:46:25.083922319Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-14T07:46:25.08498481Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.062541ms grafana | logger=migrator t=2025-06-14T07:46:25.089116989Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-14T07:46:25.089614083Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-14T07:46:25.093882075Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-14T07:46:25.094537041Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=653.736µs grafana | logger=migrator t=2025-06-14T07:46:25.101574978Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | 
logger=migrator t=2025-06-14T07:46:25.102756719Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.180191ms grafana | logger=migrator t=2025-06-14T07:46:25.10604155Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-14T07:46:25.114051297Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.002637ms grafana | logger=migrator t=2025-06-14T07:46:25.119526649Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-14T07:46:25.121529689Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=2.00313ms grafana | logger=migrator t=2025-06-14T07:46:25.156802075Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-14T07:46:25.158513582Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.712326ms grafana | logger=migrator t=2025-06-14T07:46:25.168128843Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-14T07:46:25.169033132Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=903.409µs grafana | logger=migrator t=2025-06-14T07:46:25.17199075Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-14T07:46:25.17306601Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.07459ms grafana | logger=migrator t=2025-06-14T07:46:25.176605194Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:25.177676554Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.07045ms grafana | logger=migrator t=2025-06-14T07:46:25.181813104Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-14T07:46:25.181974295Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=162.721µs grafana | logger=migrator t=2025-06-14T07:46:25.188848651Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-14T07:46:25.188909282Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=61.451µs grafana | logger=migrator t=2025-06-14T07:46:25.192389155Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-14T07:46:25.200504362Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.115317ms grafana | logger=migrator t=2025-06-14T07:46:25.203699853Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-14T07:46:25.204084196Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=383.973µs grafana | logger=migrator t=2025-06-14T07:46:25.20863947Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | 
logger=migrator t=2025-06-14T07:46:25.209569219Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=929.229µs grafana | logger=migrator t=2025-06-14T07:46:25.214257244Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-14T07:46:25.21489816Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=652.766µs grafana | logger=migrator t=2025-06-14T07:46:25.218738006Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-14T07:46:25.220412012Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.676146ms grafana | logger=migrator t=2025-06-14T07:46:25.22435314Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-14T07:46:25.224972746Z level=info msg="Migration successfully executed" id="create secrets table" duration=619.156µs grafana | logger=migrator t=2025-06-14T07:46:25.230771142Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-14T07:46:25.263610174Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.839642ms grafana | logger=migrator t=2025-06-14T07:46:25.285390483Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-14T07:46:25.296434228Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.044645ms grafana | logger=migrator t=2025-06-14T07:46:25.300112713Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-14T07:46:25.300224184Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=111.001µs grafana | logger=migrator t=2025-06-14T07:46:25.306192451Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-14T07:46:25.339092975Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.899904ms grafana | logger=migrator t=2025-06-14T07:46:25.342310015Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-14T07:46:25.373008429Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.697734ms grafana | logger=migrator t=2025-06-14T07:46:25.376103889Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-14T07:46:25.377097438Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=989.989µs grafana | logger=migrator t=2025-06-14T07:46:25.382831183Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-14T07:46:25.384459009Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.626416ms grafana | logger=migrator t=2025-06-14T07:46:25.388293835Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-14T07:46:25.38887641Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=582.555µs 
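
Every entry in this log follows the same bookkeeping pattern: the migrator looks a stable migration id up in its migration_log table, executes the step only if no record exists, then records it and logs the elapsed time. The earlier level=warn line ("Skipping migration: Already executed, but not recorded in migration log") marks the opposite case, where the schema change is already present but no record of it exists. A simplified stand-in for that loop, not Grafana's actual migrator code, with an assumed migration_log schema:

package sketch

import (
	"database/sql"
	"log"
	"time"
)

type migration struct {
	id   string // stable id, as seen in the log lines above
	stmt string // the DDL/DML this step runs
}

// run executes each migration at most once, mirroring the
// "Executing migration" / "Migration successfully executed ... duration=" pairs above.
func run(db *sql.DB, migs []migration) error {
	// Assumed bookkeeping table; the real migration_log schema has more columns.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS migration_log (
		migration_id TEXT PRIMARY KEY,
		success      BOOLEAN,
		timestamp    DATETIME)`); err != nil {
		return err
	}
	for _, m := range migs {
		var seen int
		if err := db.QueryRow(
			`SELECT COUNT(*) FROM migration_log WHERE migration_id = ?`, m.id).Scan(&seen); err != nil {
			return err
		}
		if seen > 0 {
			continue // already recorded: nothing to do for this id
		}
		log.Printf(`level=info msg="Executing migration" id=%q`, m.id)
		start := time.Now()
		if _, err := db.Exec(m.stmt); err != nil {
			return err
		}
		if _, err := db.Exec(
			`INSERT INTO migration_log (migration_id, success, timestamp) VALUES (?, 1, ?)`,
			m.id, time.Now()); err != nil {
			return err
		}
		log.Printf(`level=info msg="Migration successfully executed" id=%q duration=%s`,
			m.id, time.Since(start))
	}
	return nil
}

Because already-recorded ids are skipped, re-running the container against the same database (as a CSIT job like this one may do) only logs the new migrations.
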
grafana | logger=migrator t=2025-06-14T07:46:25.428244206Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-14T07:46:25.430257926Z level=info msg="Migration successfully executed" id="create permission table" duration=2.01755ms grafana | logger=migrator t=2025-06-14T07:46:25.436529295Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-14T07:46:25.437622276Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.093371ms grafana | logger=migrator t=2025-06-14T07:46:25.441126659Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-14T07:46:25.4423135Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.186291ms grafana | logger=migrator t=2025-06-14T07:46:25.445546842Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-14T07:46:25.446578942Z level=info msg="Migration successfully executed" id="create role table" duration=1.03083ms grafana | logger=migrator t=2025-06-14T07:46:25.452396857Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-14T07:46:25.460560675Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.163918ms grafana | logger=migrator t=2025-06-14T07:46:25.465106288Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-14T07:46:25.473093734Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.991046ms grafana | logger=migrator t=2025-06-14T07:46:25.477650808Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-14T07:46:25.478504426Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=853.158µs grafana | logger=migrator t=2025-06-14T07:46:25.483202071Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-14T07:46:25.484420782Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.217591ms grafana | logger=migrator t=2025-06-14T07:46:25.489179738Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:25.49046417Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.284112ms grafana | logger=migrator t=2025-06-14T07:46:25.495818571Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-14T07:46:25.496862572Z level=info msg="Migration successfully executed" id="create team role table" duration=1.047211ms grafana | logger=migrator t=2025-06-14T07:46:25.502467445Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-14T07:46:25.503647396Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.179461ms grafana | logger=migrator t=2025-06-14T07:46:25.50930175Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-14T07:46:25.510650574Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.347044ms grafana | logger=migrator t=2025-06-14T07:46:25.516224377Z 
level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-14T07:46:25.518354596Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.132749ms grafana | logger=migrator t=2025-06-14T07:46:25.523829399Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-14T07:46:25.524807079Z level=info msg="Migration successfully executed" id="create user role table" duration=977.14µs grafana | logger=migrator t=2025-06-14T07:46:25.527753227Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-14T07:46:25.529282381Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.527204ms grafana | logger=migrator t=2025-06-14T07:46:25.553105959Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-14T07:46:25.554972536Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.865337ms grafana | logger=migrator t=2025-06-14T07:46:25.560105785Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-14T07:46:25.562070714Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.959849ms grafana | logger=migrator t=2025-06-14T07:46:25.565993081Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-14T07:46:25.567546756Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.553585ms grafana | logger=migrator t=2025-06-14T07:46:25.571591924Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-14T07:46:25.572874347Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.281693ms grafana | logger=migrator t=2025-06-14T07:46:25.578148268Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-14T07:46:25.579320969Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.171981ms grafana | logger=migrator t=2025-06-14T07:46:25.585701329Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-14T07:46:25.594849097Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.147778ms grafana | logger=migrator t=2025-06-14T07:46:25.599109488Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-14T07:46:25.599938605Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=828.837µs grafana | logger=migrator t=2025-06-14T07:46:25.605845912Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-14T07:46:25.608371406Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.519023ms grafana | logger=migrator t=2025-06-14T07:46:25.612116112Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:25.613712567Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.597435ms grafana | 
logger=migrator t=2025-06-14T07:46:25.61714712Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-14T07:46:25.6182206Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.0731ms grafana | logger=migrator t=2025-06-14T07:46:25.623062597Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-14T07:46:25.623853624Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=790.607µs grafana | logger=migrator t=2025-06-14T07:46:25.627581919Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-14T07:46:25.628749341Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.167052ms grafana | logger=migrator t=2025-06-14T07:46:25.633636497Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-14T07:46:25.642312761Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.671324ms grafana | logger=migrator t=2025-06-14T07:46:25.645770413Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-14T07:46:25.651521658Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.750325ms grafana | logger=migrator t=2025-06-14T07:46:25.654990061Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-14T07:46:25.660705665Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.714404ms grafana | logger=migrator t=2025-06-14T07:46:25.702740447Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-14T07:46:25.713187257Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.46111ms grafana | logger=migrator t=2025-06-14T07:46:25.716798861Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-14T07:46:25.717753661Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=951.74µs grafana | logger=migrator t=2025-06-14T07:46:25.725474074Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-14T07:46:25.726909848Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.438044ms grafana | logger=migrator t=2025-06-14T07:46:25.733932005Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-14T07:46:25.735901314Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.9762ms grafana | logger=migrator t=2025-06-14T07:46:25.741126404Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-14T07:46:25.749631675Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.504491ms grafana | logger=migrator t=2025-06-14T07:46:25.754361381Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-14T07:46:25.755689023Z level=info 
msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.327533ms grafana | logger=migrator t=2025-06-14T07:46:25.76063271Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-14T07:46:25.761610449Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=977.209µs grafana | logger=migrator t=2025-06-14T07:46:25.76587397Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-14T07:46:25.766878509Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.004209ms grafana | logger=migrator t=2025-06-14T07:46:25.771307782Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-14T07:46:25.772539753Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.231091ms grafana | logger=migrator t=2025-06-14T07:46:25.780161306Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-14T07:46:25.780318548Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=158.042µs grafana | logger=migrator t=2025-06-14T07:46:25.784925491Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-14T07:46:25.786282305Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.356253ms grafana | logger=migrator t=2025-06-14T07:46:25.790383843Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-14T07:46:25.790531795Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=148.061µs grafana | logger=migrator t=2025-06-14T07:46:25.79624751Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-14T07:46:25.796828285Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=580.715µs grafana | logger=migrator t=2025-06-14T07:46:25.801990845Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-14T07:46:25.802772522Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=781.477µs grafana | logger=migrator t=2025-06-14T07:46:25.852974582Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-14T07:46:25.854691677Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.764996ms grafana | logger=migrator t=2025-06-14T07:46:25.859367612Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-14T07:46:25.859697855Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=330.893µs grafana | logger=migrator t=2025-06-14T07:46:25.865788493Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-14T07:46:25.86643519Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=649.017µs grafana | logger=migrator 
t=2025-06-14T07:46:25.870116405Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-14T07:46:25.871233095Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.11552ms grafana | logger=migrator t=2025-06-14T07:46:25.874743709Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-14T07:46:25.876325114Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.580305ms grafana | logger=migrator t=2025-06-14T07:46:25.884721324Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-14T07:46:25.891339637Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.621173ms grafana | logger=migrator t=2025-06-14T07:46:25.895392016Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-14T07:46:25.895415037Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=22.511µs grafana | logger=migrator t=2025-06-14T07:46:25.899006251Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-14T07:46:25.900353424Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.345663ms grafana | logger=migrator t=2025-06-14T07:46:25.904259981Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-14T07:46:25.905970897Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.709936ms grafana | logger=migrator t=2025-06-14T07:46:25.912194967Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-14T07:46:25.913270416Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.074519ms grafana | logger=migrator t=2025-06-14T07:46:25.918464236Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-14T07:46:25.928640834Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.176388ms grafana | logger=migrator t=2025-06-14T07:46:25.932532421Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:25.933576791Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.04406ms grafana | logger=migrator t=2025-06-14T07:46:25.938560278Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:25.93972783Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.165922ms grafana | logger=migrator t=2025-06-14T07:46:25.945241422Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:25.966091231Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.850569ms grafana | logger=migrator t=2025-06-14T07:46:25.996732274Z level=info msg="Executing migration" id="create correlation v2" grafana | 
logger=migrator t=2025-06-14T07:46:25.998489751Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.756537ms grafana | logger=migrator t=2025-06-14T07:46:26.003998583Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.005691649Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.692356ms grafana | logger=migrator t=2025-06-14T07:46:26.009735728Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.010772388Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.0361ms grafana | logger=migrator t=2025-06-14T07:46:26.015466943Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-14T07:46:26.017237549Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.769286ms grafana | logger=migrator t=2025-06-14T07:46:26.023322318Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:26.023791132Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=470.884µs grafana | logger=migrator t=2025-06-14T07:46:26.028163894Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-14T07:46:26.029310195Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.145742ms grafana | logger=migrator t=2025-06-14T07:46:26.032770178Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-14T07:46:26.041155188Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.38505ms grafana | logger=migrator t=2025-06-14T07:46:26.046986013Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-14T07:46:26.055188612Z level=info msg="Migration successfully executed" id="add type column" duration=8.201449ms grafana | logger=migrator t=2025-06-14T07:46:26.058962587Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-14T07:46:26.059698625Z level=info msg="Migration successfully executed" id="create entity_events table" duration=735.508µs grafana | logger=migrator t=2025-06-14T07:46:26.062829065Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-14T07:46:26.063588162Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=758.397µs grafana | logger=migrator t=2025-06-14T07:46:26.067777251Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:26.068313177Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:26.071702749Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:26.072241854Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | 
logger=migrator t=2025-06-14T07:46:26.077295222Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-14T07:46:26.078559094Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.263212ms grafana | logger=migrator t=2025-06-14T07:46:26.083162168Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-14T07:46:26.084555421Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.391843ms grafana | logger=migrator t=2025-06-14T07:46:26.088096546Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:26.089252816Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.15555ms grafana | logger=migrator t=2025-06-14T07:46:26.094208564Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-14T07:46:26.095416665Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.207721ms grafana | logger=migrator t=2025-06-14T07:46:26.099936069Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.101051169Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.11456ms grafana | logger=migrator t=2025-06-14T07:46:26.127738674Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.130068176Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.329382ms grafana | logger=migrator t=2025-06-14T07:46:26.135575739Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-14T07:46:26.136898241Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.326592ms grafana | logger=migrator t=2025-06-14T07:46:26.141525146Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-14T07:46:26.142721347Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.19555ms grafana | logger=migrator t=2025-06-14T07:46:26.149451681Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.150600462Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.148021ms grafana | logger=migrator t=2025-06-14T07:46:26.155755681Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:26.158198575Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.442094ms grafana | logger=migrator t=2025-06-14T07:46:26.162445995Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-14T07:46:26.163623586Z level=info msg="Migration successfully 
executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.177481ms grafana | logger=migrator t=2025-06-14T07:46:26.168525032Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-14T07:46:26.189362962Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.83778ms grafana | logger=migrator t=2025-06-14T07:46:26.193060457Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-14T07:46:26.202259805Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.198947ms grafana | logger=migrator t=2025-06-14T07:46:26.215941715Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-14T07:46:26.227430075Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=11.48893ms grafana | logger=migrator t=2025-06-14T07:46:26.280120738Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-14T07:46:26.280780865Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=664.907µs grafana | logger=migrator t=2025-06-14T07:46:26.286186285Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-14T07:46:26.295370314Z level=info msg="Migration successfully executed" id="add share column" duration=9.183779ms grafana | logger=migrator t=2025-06-14T07:46:26.301040338Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-14T07:46:26.30125863Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=220.202µs grafana | logger=migrator t=2025-06-14T07:46:26.305682251Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-14T07:46:26.307437008Z level=info msg="Migration successfully executed" id="create file table" duration=1.754027ms grafana | logger=migrator t=2025-06-14T07:46:26.311861811Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-14T07:46:26.314133242Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.281561ms grafana | logger=migrator t=2025-06-14T07:46:26.317912729Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-14T07:46:26.321688404Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=3.775745ms grafana | logger=migrator t=2025-06-14T07:46:26.324728453Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-14T07:46:26.325692473Z level=info msg="Migration successfully executed" id="create file_meta table" duration=955.77µs grafana | logger=migrator t=2025-06-14T07:46:26.333501707Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-14T07:46:26.334385536Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=883.059µs grafana | logger=migrator t=2025-06-14T07:46:26.338081891Z level=info msg="Executing migration" 
id="set path collation in file table" grafana | logger=migrator t=2025-06-14T07:46:26.338125481Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=48.33µs grafana | logger=migrator t=2025-06-14T07:46:26.344185559Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-14T07:46:26.34423907Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=55.101µs grafana | logger=migrator t=2025-06-14T07:46:26.34840636Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-14T07:46:26.34948878Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.08132ms grafana | logger=migrator t=2025-06-14T07:46:26.353590079Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-14T07:46:26.353897492Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=306.583µs grafana | logger=migrator t=2025-06-14T07:46:26.359250373Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-14T07:46:26.360746517Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.495154ms grafana | logger=migrator t=2025-06-14T07:46:26.369206288Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-14T07:46:26.378424735Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.187617ms grafana | logger=migrator t=2025-06-14T07:46:26.383282462Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-14T07:46:26.383454994Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=171.512µs grafana | logger=migrator t=2025-06-14T07:46:26.388094098Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-14T07:46:26.389975506Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.880687ms grafana | logger=migrator t=2025-06-14T07:46:26.43652376Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-14T07:46:26.437462899Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=944.559µs grafana | logger=migrator t=2025-06-14T07:46:26.441012523Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-14T07:46:26.441369087Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=357.094µs grafana | logger=migrator t=2025-06-14T07:46:26.444743369Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-14T07:46:26.445275424Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=531.125µs grafana | logger=migrator t=2025-06-14T07:46:26.451105459Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-14T07:46:26.460481329Z level=info msg="Migration successfully executed" id="add 
action column to seed_assignment" duration=9.37508ms grafana | logger=migrator t=2025-06-14T07:46:26.46483467Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-14T07:46:26.473971557Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.136047ms grafana | logger=migrator t=2025-06-14T07:46:26.477007367Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-14T07:46:26.478792063Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.791456ms grafana | logger=migrator t=2025-06-14T07:46:26.490415034Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-14T07:46:26.564092348Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.676644ms grafana | logger=migrator t=2025-06-14T07:46:26.574321575Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-14T07:46:26.575813489Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.491524ms grafana | logger=migrator t=2025-06-14T07:46:26.581743266Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-14T07:46:26.582998277Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.257681ms grafana | logger=migrator t=2025-06-14T07:46:26.588193487Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-14T07:46:26.613701541Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.507134ms grafana | logger=migrator t=2025-06-14T07:46:26.618818039Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-14T07:46:26.626085229Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.27001ms grafana | logger=migrator t=2025-06-14T07:46:26.631926655Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-14T07:46:26.632252338Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=325.273µs grafana | logger=migrator t=2025-06-14T07:46:26.636064004Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-14T07:46:26.636277506Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=225.852µs grafana | logger=migrator t=2025-06-14T07:46:26.644923058Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-14T07:46:26.645470584Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=546.406µs grafana | logger=migrator t=2025-06-14T07:46:26.649554933Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-14T07:46:26.649980197Z level=info msg="Migration successfully executed" id="managed 
folder permissions library panel actions migration" duration=424.053µs grafana | logger=migrator t=2025-06-14T07:46:26.654088406Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-14T07:46:26.654358789Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=269.943µs grafana | logger=migrator t=2025-06-14T07:46:26.659104764Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-14T07:46:26.660223755Z level=info msg="Migration successfully executed" id="create folder table" duration=1.118751ms grafana | logger=migrator t=2025-06-14T07:46:26.664022401Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-14T07:46:26.666978589Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.955918ms grafana | logger=migrator t=2025-06-14T07:46:26.670618164Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-14T07:46:26.671904396Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.285672ms grafana | logger=migrator t=2025-06-14T07:46:26.67640764Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-14T07:46:26.67643786Z level=info msg="Migration successfully executed" id="Update folder title length" duration=29.99µs grafana | logger=migrator t=2025-06-14T07:46:26.725644389Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-14T07:46:26.726925761Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.280932ms grafana | logger=migrator t=2025-06-14T07:46:26.732507775Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-14T07:46:26.733639965Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.13168ms grafana | logger=migrator t=2025-06-14T07:46:26.739848314Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-14T07:46:26.741001726Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.153011ms grafana | logger=migrator t=2025-06-14T07:46:26.744581549Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-14T07:46:26.745013143Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=431.104µs grafana | logger=migrator t=2025-06-14T07:46:26.751213763Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-14T07:46:26.751484746Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=270.613µs grafana | logger=migrator t=2025-06-14T07:46:26.75926389Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-14T07:46:26.760534532Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.273132ms grafana | 
logger=migrator t=2025-06-14T07:46:26.764239857Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-14T07:46:26.765241706Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.001479ms grafana | logger=migrator t=2025-06-14T07:46:26.770475967Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-14T07:46:26.771672419Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.195041ms grafana | logger=migrator t=2025-06-14T07:46:26.775971309Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-14T07:46:26.77810148Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.13303ms grafana | logger=migrator t=2025-06-14T07:46:26.7844646Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-14T07:46:26.785652292Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.187502ms grafana | logger=migrator t=2025-06-14T07:46:26.790078343Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-14T07:46:26.791174795Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.095911ms grafana | logger=migrator t=2025-06-14T07:46:26.799380563Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-14T07:46:26.800077529Z level=info msg="Migration successfully executed" id="create anon_device table" duration=696.076µs grafana | logger=migrator t=2025-06-14T07:46:26.806380599Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-14T07:46:26.808135136Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.737647ms grafana | logger=migrator t=2025-06-14T07:46:26.811959482Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-14T07:46:26.813508408Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.549386ms grafana | logger=migrator t=2025-06-14T07:46:26.820296162Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-14T07:46:26.821779336Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.484674ms grafana | logger=migrator t=2025-06-14T07:46:26.826814854Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-14T07:46:26.828007016Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.193322ms grafana | logger=migrator t=2025-06-14T07:46:26.890787454Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-14T07:46:26.893160357Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.370763ms grafana | logger=migrator t=2025-06-14T07:46:26.899413767Z level=info msg="Executing migration" 
id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-14T07:46:26.899897402Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=484.205µs grafana | logger=migrator t=2025-06-14T07:46:26.904707527Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-14T07:46:26.912375471Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.666824ms grafana | logger=migrator t=2025-06-14T07:46:26.921091764Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-14T07:46:26.923026043Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.930499ms grafana | logger=migrator t=2025-06-14T07:46:26.92908876Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-14T07:46:26.929120941Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=33.431µs grafana | logger=migrator t=2025-06-14T07:46:26.93333381Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-14T07:46:26.935433361Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.099271ms grafana | logger=migrator t=2025-06-14T07:46:26.940299047Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-14T07:46:26.940327118Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=29.091µs grafana | logger=migrator t=2025-06-14T07:46:26.945652548Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-14T07:46:26.947084442Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.430514ms grafana | logger=migrator t=2025-06-14T07:46:26.952463233Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-14T07:46:26.953749026Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.285283ms grafana | logger=migrator t=2025-06-14T07:46:26.958365199Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-14T07:46:26.959563201Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.197352ms grafana | logger=migrator t=2025-06-14T07:46:26.964085454Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-14T07:46:26.965190275Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.104181ms grafana | logger=migrator t=2025-06-14T07:46:26.969015681Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-14T07:46:26.969969971Z level=info msg="Migration successfully executed" id="copy kvstore migration status to 
each org" duration=954.24µs grafana | logger=migrator t=2025-06-14T07:46:26.975136049Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-14T07:46:26.980743513Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=5.608114ms grafana | logger=migrator t=2025-06-14T07:46:26.986042703Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-14T07:46:26.986867342Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=823.859µs grafana | logger=migrator t=2025-06-14T07:46:26.990692878Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-14T07:46:26.991722828Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.02745ms grafana | logger=migrator t=2025-06-14T07:46:27.027484079Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-14T07:46:27.029858042Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=2.372963ms grafana | logger=migrator t=2025-06-14T07:46:27.036763067Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-14T07:46:27.046574641Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.811234ms grafana | logger=migrator t=2025-06-14T07:46:27.053331755Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-14T07:46:27.063608554Z level=info msg="Migration successfully executed" id="add region_slug column" duration=10.276059ms grafana | logger=migrator t=2025-06-14T07:46:27.06742744Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-14T07:46:27.075952391Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.522771ms grafana | logger=migrator t=2025-06-14T07:46:27.091664991Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-14T07:46:27.102540145Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.873984ms grafana | logger=migrator t=2025-06-14T07:46:27.106140629Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-14T07:46:27.10627241Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=127.971µs grafana | logger=migrator t=2025-06-14T07:46:27.109628172Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-14T07:46:27.110495311Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=867.019µs grafana | logger=migrator t=2025-06-14T07:46:27.116790321Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-14T07:46:27.130763824Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=13.967183ms grafana | logger=migrator t=2025-06-14T07:46:27.151155348Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-14T07:46:27.151673033Z level=info msg="Migration 
successfully executed" id="Update uid column values for migration run" duration=514.115µs grafana | logger=migrator t=2025-06-14T07:46:27.156278997Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-14T07:46:27.157962164Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.683407ms grafana | logger=migrator t=2025-06-14T07:46:27.169277822Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:27.194591613Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.315211ms grafana | logger=migrator t=2025-06-14T07:46:27.198368979Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-14T07:46:27.199190426Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=818.497µs grafana | logger=migrator t=2025-06-14T07:46:27.205543517Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:27.208088552Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.549845ms grafana | logger=migrator t=2025-06-14T07:46:27.213969398Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:27.214509473Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=539.745µs grafana | logger=migrator t=2025-06-14T07:46:27.218079827Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-14T07:46:27.219495891Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.415024ms grafana | logger=migrator t=2025-06-14T07:46:27.227732479Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T07:46:27.254229851Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.495502ms grafana | logger=migrator t=2025-06-14T07:46:27.289488048Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-14T07:46:27.290593798Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.10684ms grafana | logger=migrator t=2025-06-14T07:46:27.295903979Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-14T07:46:27.29710107Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.196981ms grafana | logger=migrator t=2025-06-14T07:46:27.302539702Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-14T07:46:27.302786634Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=246.932µs grafana | logger=migrator t=2025-06-14T07:46:27.306622781Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator 
t=2025-06-14T07:46:27.307265307Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=641.966µs grafana | logger=migrator t=2025-06-14T07:46:27.311469557Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-14T07:46:27.322023588Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.552891ms grafana | logger=migrator t=2025-06-14T07:46:27.32548186Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-14T07:46:27.334991822Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.509172ms grafana | logger=migrator t=2025-06-14T07:46:27.343340421Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-14T07:46:27.35162747Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=8.284649ms grafana | logger=migrator t=2025-06-14T07:46:27.356266744Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-14T07:46:27.365281141Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.013717ms grafana | logger=migrator t=2025-06-14T07:46:27.368711613Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-14T07:46:27.377296395Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=8.582842ms grafana | logger=migrator t=2025-06-14T07:46:27.380828638Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-14T07:46:27.389828244Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=8.999636ms grafana | logger=migrator t=2025-06-14T07:46:27.431874245Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-14T07:46:27.433620372Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.744817ms grafana | logger=migrator t=2025-06-14T07:46:27.438665471Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-14T07:46:27.474793085Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.127314ms grafana | logger=migrator t=2025-06-14T07:46:27.478770823Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-14T07:46:27.488562157Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.790424ms grafana | logger=migrator t=2025-06-14T07:46:27.493628364Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-14T07:46:27.502888593Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.262559ms grafana | logger=migrator t=2025-06-14T07:46:27.506476097Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-14T07:46:27.515772896Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=9.296149ms 
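
The grafana migrator entries above follow the usual additive schema-migration pattern: each migration has a stable id, runs exactly once, and its completion is recorded so later restarts skip it. As a minimal sketch of that pattern only (Grafana's real migrator is Go code inside the server; the table names, column names, and sqlite backend here are illustrative assumptions), a Python version could look like this:

import sqlite3

# Hypothetical migrations keyed by a stable id, loosely mirroring log entries
# such as "create cloud_migration table v1" and "add stack_id column" above.
MIGRATIONS = [
    ("create cloud_migration table v1",
     "CREATE TABLE IF NOT EXISTS cloud_migration (id INTEGER PRIMARY KEY, created TIMESTAMP)"),
    ("add stack_id column",
     "ALTER TABLE cloud_migration ADD COLUMN stack_id TEXT"),
]

def run_migrations(db_path="example.db"):
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    # migration_log records which ids already ran, so reruns are skipped.
    cur.execute("CREATE TABLE IF NOT EXISTS migration_log (migration_id TEXT PRIMARY KEY)")
    for mig_id, ddl in MIGRATIONS:
        if cur.execute("SELECT 1 FROM migration_log WHERE migration_id=?", (mig_id,)).fetchone():
            continue  # already executed on a previous start
        cur.execute(ddl)
        cur.execute("INSERT INTO migration_log (migration_id) VALUES (?)", (mig_id,))
        con.commit()
    con.close()

if __name__ == "__main__":
    run_migrations()

This is why the log shows sub-millisecond "Migration successfully executed" lines for steps that were effectively no-ops: the work is either tiny DDL or already recorded as done.
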
grafana | logger=migrator t=2025-06-14T07:46:27.520031287Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-14T07:46:27.527636749Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=7.604202ms grafana | logger=migrator t=2025-06-14T07:46:27.532465885Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-14T07:46:27.532485495Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=20.82µs grafana | logger=migrator t=2025-06-14T07:46:27.536015789Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-14T07:46:27.53603555Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=20.121µs grafana | logger=migrator t=2025-06-14T07:46:27.566798903Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:27.577328963Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.5304ms grafana | logger=migrator t=2025-06-14T07:46:27.580845027Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.590101715Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.255828ms grafana | logger=migrator t=2025-06-14T07:46:27.595366635Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-14T07:46:27.595615057Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=254.322µs grafana | logger=migrator t=2025-06-14T07:46:27.598627266Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-14T07:46:27.598805768Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=178.322µs grafana | logger=migrator t=2025-06-14T07:46:27.603729915Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:27.614315396Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.585911ms grafana | logger=migrator t=2025-06-14T07:46:27.617669928Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.625569203Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.898705ms grafana | logger=migrator t=2025-06-14T07:46:27.632490499Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-14T07:46:27.64212376Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.633601ms grafana | logger=migrator t=2025-06-14T07:46:27.645294471Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-14T07:46:27.6546433Z level=info msg="Migration successfully executed" 
id="add last_sent_at column to alert_instance table" duration=9.349349ms grafana | logger=migrator t=2025-06-14T07:46:27.659198594Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-14T07:46:27.659624768Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=426.224µs grafana | logger=migrator t=2025-06-14T07:46:27.665777666Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:27.676958313Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=11.187287ms grafana | logger=migrator t=2025-06-14T07:46:27.708200101Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.720777141Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=12.57699ms grafana | logger=migrator t=2025-06-14T07:46:27.724326325Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-14T07:46:27.724499506Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=178.051µs grafana | logger=migrator t=2025-06-14T07:46:27.728710176Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-14T07:46:27.729081711Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=370.954µs grafana | logger=migrator t=2025-06-14T07:46:27.733122079Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-14T07:46:27.734821175Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.697926ms grafana | logger=migrator t=2025-06-14T07:46:27.738743102Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-14T07:46:27.738771992Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=30.47µs grafana | logger=migrator t=2025-06-14T07:46:27.743570989Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-14T07:46:27.743600459Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=31.251µs grafana | logger=migrator t=2025-06-14T07:46:27.748688797Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-14T07:46:27.749050341Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=360.374µs grafana | logger=migrator t=2025-06-14T07:46:27.752247661Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.762141605Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=9.892544ms grafana | logger=migrator t=2025-06-14T07:46:27.765301035Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator 
t=2025-06-14T07:46:27.772662196Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.359261ms grafana | logger=migrator t=2025-06-14T07:46:27.776922317Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-14T07:46:27.777921076Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=998.179µs grafana | logger=migrator t=2025-06-14T07:46:27.78563271Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-14T07:46:27.786866441Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.232531ms grafana | logger=migrator t=2025-06-14T07:46:27.790504785Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-14T07:46:27.800302049Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=9.796464ms grafana | logger=migrator t=2025-06-14T07:46:27.804689351Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.814694916Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=10.004926ms grafana | logger=migrator t=2025-06-14T07:46:27.853249774Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-14T07:46:27.853275364Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-14T07:46:27.853526447Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-14T07:46:27.853547317Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=305.893µs grafana | logger=migrator t=2025-06-14T07:46:27.857860028Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-14T07:46:27.858721266Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=860.768µs grafana | logger=migrator t=2025-06-14T07:46:27.863171408Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-14T07:46:27.864256729Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.085131ms grafana | logger=migrator t=2025-06-14T07:46:27.868669931Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-14T07:46:27.870215796Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.544195ms grafana | logger=migrator t=2025-06-14T07:46:27.873941781Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-14T07:46:27.875815829Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.882438ms grafana | 
logger=migrator t=2025-06-14T07:46:27.881318422Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-14T07:46:27.882459242Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.14048ms grafana | logger=migrator t=2025-06-14T07:46:27.889906494Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:27.901703276Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.797432ms grafana | logger=migrator t=2025-06-14T07:46:27.904994907Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:27.913512078Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=8.515961ms grafana | logger=migrator t=2025-06-14T07:46:27.918051682Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-14T07:46:27.927808045Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.754943ms grafana | logger=migrator t=2025-06-14T07:46:27.933466129Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-14T07:46:27.942264544Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=8.797385ms grafana | logger=migrator t=2025-06-14T07:46:27.946806266Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-14T07:46:27.946932028Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-14T07:46:27.946939178Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=133.312µs grafana | logger=migrator t=2025-06-14T07:46:27.951050387Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-14T07:46:27.95336485Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=2.317753ms grafana | logger=migrator t=2025-06-14T07:46:27.957081925Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.580498067s grafana | logger=migrator t=2025-06-14T07:46:27.957758211Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-14T07:46:27.972433381Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-14T07:46:27.972697113Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-14T07:46:27.981783391Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-14T07:46:28.071048701Z level=info msg="Restored cache from database" duration=498.056µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.079471161Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-14T07:46:28.079486331Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-14T07:46:28.086824061Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator 
t=2025-06-14T07:46:28.087573868Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=748.867µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.122515521Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-14T07:46:28.122542732Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=28.001µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.127517889Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-14T07:46:28.127651361Z level=info msg="Migration successfully executed" id="drop table resource" duration=133.341µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.131732189Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-14T07:46:28.133477336Z level=info msg="Migration successfully executed" id="create table resource" duration=1.745377ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.137549694Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-14T07:46:28.138776607Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.223813ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.144282179Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.14437873Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=98.551µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.147898583Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.149061374Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.14856ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.154471655Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-14T07:46:28.156781617Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.309362ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.162697124Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-14T07:46:28.163662644Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=965.91µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.172838181Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-14T07:46:28.173045173Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=206.982µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.179696736Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-14T07:46:28.181263081Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.565455ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.186119408Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-14T07:46:28.187392189Z level=info msg="Migration successfully executed" id="create table resource_version, index: 
0" duration=1.275031ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.190804452Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-14T07:46:28.190998093Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=192.781µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.194003042Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-14T07:46:28.195345855Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.341453ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.200165712Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-14T07:46:28.202915018Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.758176ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.20944489Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-14T07:46:28.210703451Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.258741ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.21581823Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.22620888Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.38657ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.251208108Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-14T07:46:28.264899198Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=13.6921ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.269728835Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-14T07:46:28.270647643Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=918.688µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.276186316Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-14T07:46:28.278439878Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=2.253472ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.28392823Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.294669962Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.741052ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.300343346Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-14T07:46:28.311091808Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.746932ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.316704422Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-14T07:46:28.316734622Z level=info msg="finding any 
deletion markers" grafana | logger=resource-migrator t=2025-06-14T07:46:28.317204496Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=499.044µs grafana | logger=resource-migrator t=2025-06-14T07:46:28.320772251Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-14T07:46:28.322471717Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.697176ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.328166901Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.339085035Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.918434ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.344006052Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-14T07:46:28.346153243Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.145761ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.352702795Z level=info msg="migrations completed" performed=26 skipped=0 duration=265.932705ms grafana | logger=resource-migrator t=2025-06-14T07:46:28.353494943Z level=info msg="Unlocking database" grafana | t=2025-06-14T07:46:28.353744285Z level=info caller=logger.go:214 time=2025-06-14T07:46:28.353718155Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-14T07:46:28.365822791Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-14T07:46:28.40887542Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-14T07:46:28.408903831Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-14T07:46:28.408983321Z level=info msg="Plugins loaded" count=53 duration=43.16472ms grafana | logger=query_data t=2025-06-14T07:46:28.415702606Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-14T07:46:28.430937491Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-14T07:46:28.443990986Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-14T07:46:28.454165362Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-14T07:46:28.454199212Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-14T07:46:28.458566225Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-14T07:46:28.459092369Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-14T07:46:28.45917353Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2025-06-14T07:46:28.461825625Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:28.463409031Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=http.server 
t=2025-06-14T07:46:28.464095287Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.state.manager t=2025-06-14T07:46:28.548989776Z level=info msg="State cache has been initialized" states=0 duration=89.889507ms grafana | logger=ngalert.scheduler t=2025-06-14T07:46:28.549047667Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-14T07:46:28.549229878Z level=info msg=starting first_tick=2025-06-14T07:46:30Z grafana | logger=plugins.update.checker t=2025-06-14T07:46:28.561284464Z level=info msg="Update check succeeded" duration=100.359447ms grafana | logger=provisioning.datasources t=2025-06-14T07:46:28.639110505Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-14T07:46:28.650778937Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=provisioning.alerting t=2025-06-14T07:46:28.708365916Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-14T07:46:28.708403256Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-14T07:46:28.710100372Z level=info msg="starting to provision dashboards" grafana | logger=grafana.update.checker t=2025-06-14T07:46:28.753684617Z level=info msg="Update check succeeded" duration=292.854661ms grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.785716423Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.786947714Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.788815833Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.792077363Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.79598575Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.800368163Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.810459889Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.811257426Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-14T07:46:28.811304317Z level=info msg="Patterns update finished" duration=129.297932ms grafana | logger=grafana-apiserver t=2025-06-14T07:46:28.811911752Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-14T07:46:28.872987245Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-14T07:46:28.95225721Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-14T07:46:29.094572147Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration 
t=2025-06-14T07:46:29.130439608Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.130517659Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=667.039427ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.13066051Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-14T07:46:29.338404239Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-14T07:46:29.393349463Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-14T07:46:29.414472334Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.414544555Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=283.843734ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.414576635Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=provisioning.dashboard t=2025-06-14T07:46:29.439431602Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-14T07:46:29.584685276Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-14T07:46:29.652802874Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-14T07:46:29.669973318Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.670000588Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=255.394713ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.670058549Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-14T07:46:29.85904718Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-14T07:46:29.923161801Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-14T07:46:29.945495964Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T07:46:29.945522254Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=275.448455ms grafana | logger=infra.usagestats t=2025-06-14T07:48:00.469144502Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
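
The preflight step above ("Check if Zookeeper is healthy ...") is performed by the Confluent startup scripts with a Java ZooKeeper client, as the next log entries show. A lightweight alternative check from the host, sketched here under the assumptions that the ensemble is reachable on port 2181 and that the 'ruok' four-letter command is enabled via 4lw.commands.whitelist on the server, could be:

import socket

def zookeeper_is_healthy(host="localhost", port=2181, timeout=5.0):
    # Send ZooKeeper's 'ruok' four-letter command; a healthy server replies 'imok'.
    # Requires 'ruok' to be whitelisted (4lw.commands.whitelist) on ZooKeeper 3.5+.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            reply = sock.recv(4)
        return reply == b"imok"
    except OSError:
        return False

if __name__ == "__main__":
    print("zookeeper healthy:", zookeeper_is_healthy())

This is not what the CSIT job runs; it only illustrates the health semantics the preflight is verifying before Kafka is launched.
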
kafka | [2025-06-14 07:46:24,169] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,170] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,173] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,176] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-14 07:46:24,181] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-14 07:46:24,188] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:24,222] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:24,223] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:24,232] INFO Socket connection established, initiating session, client: /172.17.0.7:37154, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:24,264] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000027a370000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:24,392] INFO Session: 0x10000027a370000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:24,392] INFO EventThread shut down for session: 0x10000027a370000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
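
Once the broker finishes launching, a quick smoke test from the host can confirm that the PLAINTEXT_HOST listener works end to end. The sketch below assumes the third-party kafka-python client (confluent-kafka would serve equally) and the localhost:29092 advertised listener shown in the KafkaConfig dump further down; the topic name is made up, and topic auto-creation is expected because auto.create.topics.enable = true in that same dump. It is not part of the CSIT job itself.

from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BOOTSTRAP = "localhost:29092"  # PLAINTEXT_HOST listener advertised by the broker
TOPIC = "smoke-test"           # auto-created on first produce in this setup

# Produce one message.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send(TOPIC, b"hello from the test environment")
producer.flush()
producer.close()

# Read it back from the beginning of the topic, giving up after 10s of silence.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,
)
for record in consumer:
    print(record.topic, record.offset, record.value)
consumer.close()
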
kafka | [2025-06-14 07:46:25,146] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-14 07:46:25,457] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-14 07:46:25,550] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-14 07:46:25,551] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-14 07:46:25,552] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-14 07:46:25,565] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-14 07:46:25,569] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,569] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,570] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,571] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 07:46:25,576] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-14 07:46:25,582] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:25,584] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-14 07:46:25,590] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:25,597] INFO Socket connection established, initiating session, client: /172.17.0.7:37156, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:25,608] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000027a370001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 07:46:25,613] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-14 07:46:26,018] INFO Cluster ID = 39Nu5_8lRbaMBkBXvQZwoQ (kafka.server.KafkaServer) kafka | [2025-06-14 07:46:26,021] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-14 07:46:26,070] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-14 07:46:26,105] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-14 07:46:26,108] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-14 07:46:26,106] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-14 07:46:26,108] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-14 07:46:26,147] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-14 07:46:26,151] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-14 07:46:26,165] INFO Loaded 0 logs in 18ms. (kafka.log.LogManager) kafka | [2025-06-14 07:46:26,166] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-14 07:46:26,168] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
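The KafkaConfig dump above shows the listener split used by the test stack: containers on the compose network reach the broker as PLAINTEXT://kafka:9092, while anything running on the Jenkins host goes through PLAINTEXT_HOST://localhost:29092. A minimal reachability probe against the host-side listener might look like the sketch below; kafka-python, the client id, and the assumption that port 29092 is published to the host are local-tooling choices for illustration, not part of the CSIT suite itself.

# Hedged sketch: reachability check against the PLAINTEXT_HOST listener from the config dump.
# Assumes kafka-python is installed and 29092 is mapped to the host by the compose file.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:29092", client_id="csit-probe")
print("topics visible from the host listener:", sorted(consumer.topics()))
consumer.close()

Inside the compose network the same check would use kafka:9092, matching inter.broker.listener.name = PLAINTEXT above.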
(kafka.log.LogManager) kafka | [2025-06-14 07:46:26,185] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-14 07:46:26,234] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-14 07:46:26,252] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-14 07:46:26,267] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-14 07:46:26,312] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-14 07:46:26,725] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-14 07:46:26,731] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-14 07:46:26,757] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-14 07:46:26,758] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-14 07:46:26,758] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-14 07:46:26,763] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-14 07:46:26,767] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-14 07:46:26,791] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,794] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,796] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,796] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,814] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-14 07:46:26,845] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-14 07:46:26,900] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749887186862,1749887186862,1,0,0,72057604678287361,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-14 07:46:26,902] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-14 07:46:26,967] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-14 07:46:26,984] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,984] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:26,984] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:27,000] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:46:27,028] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-14 07:46:27,031] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:46:27,049] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,050] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-14 07:46:27,058] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-14 07:46:27,059] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-14 07:46:27,060] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,067] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-14 07:46:27,113] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-14 07:46:27,119] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-14 07:46:27,120] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,130] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,136] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,139] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,147] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-14 07:46:27,163] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
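At this point the broker has registered itself under /brokers/ids/1 with its advertised endpoints and the controller election has completed. If a run needs to confirm that registration independently, the znode can be read back directly; the sketch below uses the kazoo client and the zookeeper host name from the log, both of which are assumptions about local tooling rather than anything the CSIT job does.

# Hedged sketch: read the broker registration znode created at start-up.
# Assumes kazoo is installed and the "zookeeper" host name resolves from where this runs;
# the JSON payload layout follows Kafka's ZK-mode broker registration format.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")
zk.start()
data, stat = zk.get("/brokers/ids/1")          # znode the broker just created
broker = json.loads(data.decode("utf-8"))
print("advertised endpoints:", broker.get("endpoints"), "czxid:", stat.czxid)
zk.stop()
zk.close()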
(kafka.network.SocketServer) kafka | [2025-06-14 07:46:27,166] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,176] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,179] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-14 07:46:27,179] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-14 07:46:27,179] INFO Kafka startTimeMs: 1749887187173 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-14 07:46:27,182] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-14 07:46:27,186] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-14 07:46:27,200] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-14 07:46:27,201] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,201] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,202] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,202] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,207] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,207] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,207] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,208] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-14 07:46:27,209] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,213] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-14 07:46:27,220] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-14 07:46:27,220] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-14 07:46:27,225] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-14 07:46:27,225] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-14 07:46:27,225] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-14 07:46:27,226] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-14 07:46:27,231] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state 
-> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-14 07:46:27,231] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,252] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-14 07:46:27,262] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,262] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,262] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,263] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,264] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,284] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:27,341] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-14 07:46:27,354] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-14 07:46:27,376] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-14 07:46:32,286] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-14 07:46:32,288] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,189] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-14 07:47:00,197] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,202] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,214] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> 
ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-14 07:47:00,232] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(uR2ret0KRuqqpcwCtr8EXQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,235] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,241] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,241] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,252] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,252] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,286] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,289] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-14 07:47:00,290] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-14 07:47:00,293] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,294] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,300] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(de09aY6VRG29YXJ5fJaj4A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,300] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,301] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition 
to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-14 07:47:00,302] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 
07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,304] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica 
(state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,305] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from 
NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 07:47:00,306] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,336] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-14 07:47:00,337] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-14 07:47:00,337] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,475] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-14 07:47:00,486] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-25 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-14 07:47:00,487] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-47 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-14 07:47:00,488] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-14 07:47:00,490] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,492] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,493] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,494] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(uR2ret0KRuqqpcwCtr8EXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-14 07:47:00,498] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-14 07:47:00,498] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-14 07:47:00,499] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,502] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,502] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,502] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from 
NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,506] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-14 07:47:00,507] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,521] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-14 07:47:00,529] INFO [Broker id=1] Finished LeaderAndIsr request in 226ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,533] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=uR2ret0KRuqqpcwCtr8EXQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-14 07:47:00,541] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 07:47:00,542] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 07:47:00,543] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-14 07:47:00,548] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,549] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,550] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,553] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,555] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,556] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,556] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,556] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-14 07:47:00,586] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-14 07:47:00,587] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-14 07:47:00,588] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-14 07:47:00,589] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-14 07:47:00,590] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-14 07:47:00,591] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-14 07:47:00,592] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-14 07:47:00,593] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-14 07:47:00,593] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-14 07:47:00,594] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-14 07:47:00,594] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) kafka | [2025-06-14 07:47:00,606] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer 
state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,608] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,609] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,609] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,609] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,623] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,626] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,626] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,626] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,626] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,642] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,644] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,644] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,644] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,644] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,660] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,661] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,662] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,662] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,663] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,675] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,676] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,676] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,677] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,677] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,692] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,693] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,693] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,693] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,693] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
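The LogManager entries above show each __consumer_offsets partition being created with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600, which are the broker defaults for the internal offsets topic. Purely as an illustrative sketch (not part of the CSIT suite), a small Java program using the Kafka Admin API could read those settings back from a reachable broker; the bootstrap address localhost:9092 and the class name are assumptions made for this example.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collections;
import java.util.Properties;

// Hypothetical helper, for illustration only; not part of the test suite.
public class ShowOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; the CSIT environment may expose Kafka on a different host/port.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
        try (Admin admin = Admin.create(props)) {
            // describeConfigs returns the effective per-topic configuration.
            Config cfg = admin.describeConfigs(Collections.singleton(topic)).all().get().get(topic);
            System.out.println("cleanup.policy   = " + cfg.get("cleanup.policy").value());
            System.out.println("compression.type = " + cfg.get("compression.type").value());
            System.out.println("segment.bytes    = " + cfg.get("segment.bytes").value());
        }
    }
}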
(state.change.logger) kafka | [2025-06-14 07:47:00,705] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,706] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,706] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,706] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,706] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,714] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,715] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,715] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,715] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,715] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,723] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,723] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,723] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,723] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,724] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,732] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,735] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,735] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,736] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,736] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,748] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,749] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,749] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,749] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,749] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,757] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,759] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,760] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,760] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,766] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,775] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,776] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,776] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,776] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,777] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,785] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,785] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,785] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,785] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,785] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,796] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,798] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,798] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,798] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,798] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,809] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,810] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,810] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,810] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,810] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,819] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,820] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,820] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,820] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,820] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,830] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,833] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,833] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,833] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,833] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,843] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,845] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,845] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,845] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,845] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,853] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,854] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,854] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,854] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,854] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,865] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,866] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,866] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,867] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,867] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,875] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,876] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,876] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,876] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,876] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,885] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,886] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,886] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,886] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,886] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,899] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,900] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,900] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,900] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,900] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,917] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,919] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,919] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,919] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,919] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,929] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,930] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,930] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,930] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,930] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,946] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,947] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,947] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,947] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,947] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,967] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,968] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,968] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,968] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,968] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,979] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,980] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,980] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,980] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,980] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:00,988] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,989] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,989] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,989] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,989] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:00,996] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:00,997] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:00,997] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,997] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:00,997] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,003] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,004] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,004] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,004] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,004] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,011] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,011] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,011] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,011] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,011] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,016] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,017] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,017] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,017] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,017] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,022] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,023] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,023] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,023] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,023] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,034] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,036] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,036] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,036] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,036] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,046] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,047] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,047] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,047] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,047] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,057] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,057] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,057] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,057] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,057] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,071] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,072] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,072] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,072] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,072] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,085] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,086] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,086] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,086] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,086] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,094] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,095] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,095] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,095] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,095] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,104] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,104] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,104] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,104] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,104] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,113] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,113] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,113] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,113] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,113] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,123] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,124] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,124] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,124] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,124] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,132] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,132] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,132] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,133] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,133] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,140] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,141] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,141] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,141] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,141] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,147] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,148] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,148] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,148] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,148] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,157] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,158] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,158] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,158] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,158] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 07:47:01,166] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,166] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,166] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,166] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,167] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 07:47:01,175] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 07:47:01,175] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 07:47:01,175] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,175] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 07:47:01,176] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(de09aY6VRG29YXJ5fJaj4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
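The broker entries above show the single broker (id 1) creating each of the 50 __consumer_offsets partitions with cleanup.policy=compact and segment.bytes=104857600 and electing itself leader with ISR [1]. A minimal sketch, assuming a kafka-clients dependency and a broker reachable at kafka:9092 as in this compose setup (the class name is illustrative, not part of the job), of reading that topic configuration back with the AdminClient:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address assumed from the compose network seen in this log.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

            try (Admin admin = Admin.create(props)) {
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Map<ConfigResource, Config> configs =
                    admin.describeConfigs(Collections.singleton(topic)).all().get();
                // Expect cleanup.policy=compact and segment.bytes=104857600,
                // matching the LogManager entries in the log above.
                configs.get(topic).entries().forEach(e ->
                    System.out.println(e.name() + " = " + e.value()));
            }
        }
    }

Run against the compose network, this should print the compacted-cleanup settings the LogManager reports when creating those partitions.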
(state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-14 07:47:01,180] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-13 (state.change.logger) kafka | [2025-06-14 07:47:01,181] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-14 07:47:01,182] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 
0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 
0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,189] INFO [Broker id=1] Finished LeaderAndIsr request in 641ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-14 07:47:01,193] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=de09aY6VRG29YXJ5fJaj4A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-14 07:47:01,195] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,196] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,196] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,197] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,197] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,197] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
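The GroupCoordinator and GroupMetadataManager entries record broker 1 being elected coordinator for every __consumer_offsets partition and then loading offsets and group metadata for each one. Which of those partitions (and therefore which broker) coordinates a given consumer group is derived from the group id's hash modulo the offsets topic partition count (50 here). A small sketch of that mapping; the group id below is a placeholder, not one used by this job:

    public class CoordinatorPartition {
        // Maps a consumer group id to an __consumer_offsets partition:
        // a non-negative hash of the group id modulo the partition count.
        static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
            int hash = groupId.hashCode();
            // Guard against Integer.MIN_VALUE, whose absolute value overflows.
            int nonNegative = (hash == Integer.MIN_VALUE) ? 0 : Math.abs(hash);
            return nonNegative % offsetsTopicPartitionCount;
        }

        public static void main(String[] args) {
            // Placeholder group id; the leader of the printed partition
            // (broker 1 for all 50 partitions in this log) would act as
            // that group's coordinator.
            System.out.println(partitionFor("example-consumer-group", 50));
        }
    }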
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,197] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,198] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] 
INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,199] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,199] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,200] 
INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,201] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,202] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,203] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 07:47:01,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) 
for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,205] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,206] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,206] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,206] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-14 07:47:01,206] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
(state.change.logger) kafka | [2025-06-14 07:47:01,207] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-14 07:47:01,828] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group f3ce3b90-f339-4757-bafd-fc536e7a0824 in Empty state. Created a new member id consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,838] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,842] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:01,843] INFO [GroupCoordinator 1]: Preparing to rebalance group f3ce3b90-f339-4757-bafd-fc536e7a0824 in state PreparingRebalance with old generation 0 (__consumer_offsets-36) (reason: Adding new member consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9 with group instance id None; client reason: need to re-join with the given member-id: consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:03,160] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ca5c4804-cc38-4c67-847a-b6b5f5acab5c in Empty state. Created a new member id consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:03,165] INFO [GroupCoordinator 1]: Preparing to rebalance group ca5c4804-cc38-4c67-847a-b6b5f5acab5c in state PreparingRebalance with old generation 0 (__consumer_offsets-4) (reason: Adding new member consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444 with group instance id None; client reason: need to re-join with the given member-id: consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:04,857] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:04,862] INFO [GroupCoordinator 1]: Stabilized group f3ce3b90-f339-4757-bafd-fc536e7a0824 generation 1 (__consumer_offsets-36) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:04,887] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:04,899] INFO [GroupCoordinator 1]: Assignment received from leader consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9 for group f3ce3b90-f339-4757-bafd-fc536e7a0824 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:06,167] INFO [GroupCoordinator 1]: Stabilized group ca5c4804-cc38-4c67-847a-b6b5f5acab5c generation 1 (__consumer_offsets-4) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-14 07:47:06,185] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444 for group ca5c4804-cc38-4c67-847a-b6b5f5acab5c for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-14T07:46:37.220+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-14T07:46:37.284+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-14T07:46:37.285+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-14T07:46:38.861+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-14T07:46:39.045+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 171 ms. Found 6 JPA repository interfaces. 
policy-api | [2025-06-14T07:46:39.832+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-14T07:46:39.847+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-14T07:46:39.850+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-14T07:46:39.850+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-14T07:46:39.892+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-14T07:46:39.892+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2542 ms policy-api | [2025-06-14T07:46:40.302+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-14T07:46:40.395+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-14T07:46:40.450+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-14T07:46:40.910+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-14T07:46:40.958+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-14T07:46:41.198+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd policy-api | [2025-06-14T07:46:41.200+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-14T07:46:41.317+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-14T07:46:43.505+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-14T07:46:43.509+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-14T07:46:44.257+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-14T07:46:45.233+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-14T07:46:46.415+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-14T07:46:46.467+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-14T07:46:47.245+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-14T07:46:47.423+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-14T07:46:47.451+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-14T07:46:47.480+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.123 seconds (process running for 11.73) policy-api | [2025-06-14T07:47:39.928+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-14T07:47:39.928+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-14T07:47:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 2 ms policy-csit | Invoking the robot tests from: drools-pdp-test.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV:docker policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Drools-Pdp-Test policy-csit | ============================================================================== policy-csit | Alive :: Runs Policy PDP Alive Check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Drools-Pdp-Test | PASS | policy-csit | 2 tests, 2 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 policy-db-migrator | Waiting for postgres port 5432... 
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded! policy-db-migrator | Initializing policyadmin... policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 
0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | 
rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) 
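For orientation, the repeating "> upgrade <file>.sql ... INSERT 0 1 ... rc=0" pattern above is what a per-script migration loop produces: each SQL file is applied with psql, a row is written to the per-schema changelog (the trailing "INSERT 0 1"), and the script's return code is echoed. The following is a minimal illustrative sketch of such a loop, not the actual policy-db-migrator implementation; the variables schema, FROM, TO and TAG, the connection environment, and the file glob are assumptions made only for the example.

  # Illustrative sketch only -- NOT the real policy-db-migrator script.
  # Assumes PGHOST/PGUSER/PGPASSWORD are exported and that $schema, $FROM, $TO, $TAG are set by the caller.
  for sql in $(ls [0-9]*.sql | sort); do
      echo "> upgrade ${sql}"
      psql -d "${schema}" -f "${sql}"     # prints the CREATE TABLE / ALTER TABLE / CREATE INDEX lines seen above
      rc=$?
      # Record the attempt; this INSERT is what appears as the trailing "INSERT 0 1" after each script.
      psql -d "${schema}" -c "INSERT INTO ${schema}_schema_changelog
          (script, operation, from_version, to_version, tag, success, attime)
          VALUES ('${sql}', 'upgrade', '${FROM}', '${TO}', '${TAG}', $(( rc == 0 ? 1 : 0 )), now());"
      echo "rc=${rc}"
  done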
policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.273019 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.333348 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.402243 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.48427 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.543296 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.627355 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.684556 policy-db-migrator | 8 | 
0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.762449 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.811483 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.86473 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.923752 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:23.971374 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.043004 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.095952 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.145702 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.214075 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.272458 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.335123 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.38731 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.467228 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.524357 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.611901 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.662036 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.713794 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.800051 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.856237 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.939322 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:24.988676 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.039777 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.08609 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.138357 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.20958 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1406250746230800u | 
1 | 2025-06-14 07:46:25.264591 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.335033 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.388652 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.478739 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.531116 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.603535 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.652733 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.724218 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.778433 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.879103 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:25.938845 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.018447 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.07136 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.145532 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.196989 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.254423 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.328725 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.39065 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.494512 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.550831 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.625326 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.680966 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.77377 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.828337 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:26.946602 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.017248 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.077197 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.144069 
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.200236 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.249504 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.304268 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.36366 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.456764 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.509575 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.590742 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.653779 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.728027 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.781907 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.876978 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:27.936378 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.005894 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.062474 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.141728 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.197048 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.270742 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.322172 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.371953 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.444768 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.49263 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.561887 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.608569 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.660518 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.722951 policy-db-migrator | 86 | 
0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.773271 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.832567 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.893589 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:28.970381 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.024532 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.073302 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.129302 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.178921 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.240544 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.288963 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1406250746230800u | 1 | 2025-06-14 07:46:29.344072 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.415416 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.464953 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.543164 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.59265 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.655731 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.708705 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.781863 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.838683 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.891824 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:29.958649 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:30.012826 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:30.101705 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1406250746230900u | 1 | 2025-06-14 07:46:30.153589 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.211409 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1406250746231000u 
| 1 | 2025-06-14 07:46:30.274199 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.350252 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.408247 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.470154 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.528107 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.585034 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.64331 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1406250746231000u | 1 | 2025-06-14 07:46:30.712084 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1406250746231100u | 1 | 2025-06-14 07:46:30.763474 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1406250746231200u | 1 | 2025-06-14 07:46:30.813957 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1406250746231200u | 1 | 2025-06-14 07:46:30.877295 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1406250746231200u | 1 | 2025-06-14 07:46:30.931953 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1406250746231200u | 1 | 2025-06-14 07:46:30.995029 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1406250746231300u | 1 | 2025-06-14 07:46:31.048982 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1406250746231300u | 1 | 2025-06-14 07:46:31.117283 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1406250746231300u | 1 | 2025-06-14 07:46:31.171179 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... 
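The 126-row changelog dump above (policyadmin at version 1300) can be reproduced after the run by querying the policyadmin_schema_changelog table directly. A hedged example follows, using the database name, owner, table and column names shown in the log; the host defaults and password handling via the usual PG* environment variables are assumptions.

  psql -h "${PGHOST:-localhost}" -U policy_user -d policyadmin -c "
      SELECT id, script, from_version, to_version, tag, success, attime
        FROM policyadmin_schema_changelog
       ORDER BY id;"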
policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | 
| | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator 
| ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | 
| | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:31.842908 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:31.903583 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:31.963943 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.02055 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.077187 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.154423 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.209061 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.269338 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.330991 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.395025 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.44994 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.520121 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1406250746311400u | 1 | 2025-06-14 07:46:32.578551 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.648206 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.699003 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.776286 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.827077 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.879133 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.935069 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1406250746311500u | 1 | 2025-06-14 07:46:32.990405 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1406250746311500u 
| 1 | 2025-06-14 07:46:33.041946 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1406250746311600u | 1 | 2025-06-14 07:46:33.09029 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1406250746311600u | 1 | 2025-06-14 07:46:33.144529 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1406250746311601u | 1 | 2025-06-14 07:46:33.191912 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1406250746311601u | 1 | 2025-06-14 07:46:33.241312 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1406250746311700u | 1 | 2025-06-14 07:46:33.303827 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1406250746311700u | 1 | 2025-06-14 07:46:33.360372 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1406250746311700u | 1 | 2025-06-14 07:46:33.429273 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.488542 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.552358 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.60566 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.65831 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.713362 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.770817 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.82369 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.891059 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1406250746311701u | 1 | 2025-06-14 07:46:33.943627 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... 
policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 
| | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 
policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1406250746341600u | 1 | 2025-06-14 07:46:34.632287 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1406250746351600u | 1 | 2025-06-14 07:46:35.299104 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1406250746351600u | 1 | 2025-06-14 07:46:35.385935 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-drools-pdp | Waiting for pap port 6969... 
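Before following the drools-pdp startup below, note that the policy-db-migrator bookkeeping printed above (the name/version rows plus the changelog entries with from_version, to_version, tag and success columns) can be re-checked against the database directly. A minimal sketch, assuming the Postgres service is reachable as a container named postgres on the compose network (the container name and the target database for the tracking tables are assumptions; the user, database and table names are taken from the log):

# database listing, as printed above (authentication depends on the compose setup)
docker exec -it postgres psql -U policy_user -d postgres -c '\l'

# recorded schema versions and upgrade history; the log does not show which database the
# migrator keeps these tracking tables in (the dedicated "migration" database is a guess),
# so adjust -d if the queries come back empty
docker exec -it postgres psql -U policy_user -d migration -c 'SELECT * FROM schema_versions;'
docker exec -it postgres psql -U policy_user -d migration -c 'SELECT * FROM operationshistory_schema_changelog;'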
policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! policy-drools-pdp | Waiting for kafka port 9092... 
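The repeated nc probes above show the container entrypoint blocking until its dependencies (pap on 6969, then kafka on 9092) accept TCP connections. A minimal sketch of that style of wait loop, for illustration only; the function name, retry interval and timeout are not taken from the actual entrypoint script:

wait_for_port() {
  local host="$1" port="$2"
  # keep probing until the TCP port accepts a connection
  # (-z/-w flag support varies between nc implementations; adjust if needed)
  until nc -z -w 2 "$host" "$port" 2>/dev/null; do
    echo "nc: connect to ${host} port ${port} failed, retrying..."
    sleep 2
  done
  echo "Connection to ${host} ${port} port succeeded!"
}

wait_for_port pap 6969
wait_for_port kafka 9092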
policy-drools-pdp | Connection to kafka (172.17.0.7) 9092 port [tcp/*] succeeded! policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | -- configure -- policy-drools-pdp | -- reload -- policy-drools-pdp | + set -x policy-drools-pdp | + systemConfs policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | -- systemConfs -- policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | -- maven -- policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | + set -x policy-drools-pdp | -- features -- policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | -- security -- policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + serverConfig properties policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | + set -x policy-drools-pdp | + local 
'scriptExtSuffix=pre.sh' policy-drools-pdp | -- scripts -- policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + policy exec policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller policy-drools-pdp | + OPERATION=none policy-drools-pdp | -- /opt/app/policy/bin/policy exec -- policy-drools-pdp | + '[' -z exec ] policy-drools-pdp | + OPERATION=exec policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + policy_exec policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- policy_exec --' policy-drools-pdp | + set -x policy-drools-pdp | -- policy_exec -- policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + check_x_file bin/policy-management-controller policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- check_x_file --' policy-drools-pdp | + set -x policy-drools-pdp | + FILE=bin/policy-management-controller policy-drools-pdp | -- check_x_file -- policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' 
-x bin/policy-management-controller ]] policy-drools-pdp | + return 0 policy-drools-pdp | + bin/policy-management-controller exec policy-drools-pdp | -- bin/policy-management-controller exec -- policy-drools-pdp | + _DIR=/opt/app/policy policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ] policy-drools-pdp | + CONTROLLER=policy-management-controller policy-drools-pdp | + RETVAL=0 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID policy-drools-pdp | + exec_start policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- exec_start --' policy-drools-pdp | + set -x policy-drools-pdp | + status policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- exec_start -- policy-drools-pdp | + echo '-- status --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /opt/app/policy/PID ] policy-drools-pdp | -- status -- policy-drools-pdp | + '[' true ] policy-drools-pdp | + pidof -s java policy-drools-pdp | + _PID= policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running' policy-drools-pdp | + _RUNNING=0 policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + RETVAL=1 policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running' policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + preRunning policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- preRunning --' policy-drools-pdp | + set -x policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd policy-drools-pdp | Policy Management (no pidfile) is NOT running policy-drools-pdp | -- preRunning -- policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar /opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.13.0.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.15.11.jar /opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.48.3.jar /opt/app/policy/lib/classgraph-4.8.179.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.10.1.jar /opt/app/policy/lib/commons-cli-1.9.0.jar /opt/app/policy/lib/commons-codec-1.18.0.jar /opt/app/policy/lib/commons-collections-3.2.2.jar /opt/app/policy/lib/commons-collections4-4.5.0-M3.jar /opt/app/policy/lib/commons-configuration2-2.11.0.jar /opt/app/policy/lib/commons-digester-2.1.jar /opt/app/policy/lib/commons-io-2.18.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.17.0.jar /opt/app/policy/lib/commons-logging-1.3.5.jar /opt/app/policy/lib/commons-net-3.11.1.jar /opt/app/policy/lib/commons-text-1.13.0.jar /opt/app/policy/lib/commons-validator-1.8.0.jar /opt/app/policy/lib/core-0.12.4.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar /opt/app/policy/lib/drools-engine-8.40.1.Final.jar 
/opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.36.0.jar /opt/app/policy/lib/failureaccess-1.0.3.jar /opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-2.12.1.jar /opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.4.6-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/handy-uri-templates-2.1.8.jar /opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar /opt/app/policy/lib/hibernate-core-6.6.16.Final.jar /opt/app/policy/lib/hk2-api-3.0.6.jar /opt/app/policy/lib/hk2-locator-3.0.6.jar /opt/app/policy/lib/hk2-utils-3.0.6.jar /opt/app/policy/lib/httpclient-4.5.13.jar /opt/app/policy/lib/httpcore-4.4.15.jar /opt/app/policy/lib/icu4j-74.2.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-3.0.0.jar /opt/app/policy/lib/jackson-annotations-2.18.3.jar /opt/app/policy/lib/jackson-core-2.18.3.jar /opt/app/policy/lib/jackson-databind-2.18.3.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar /opt/app/policy/lib/jakarta.activation-api-2.1.3.jar /opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-2.6.1.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.1.1.jar /opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-3.2.0.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.30.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar /opt/app/policy/lib/jcodings-1.0.58.jar /opt/app/policy/lib/jersey-client-3.1.10.jar /opt/app/policy/lib/jersey-common-3.1.10.jar 
/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar /opt/app/policy/lib/jersey-hk2-3.1.10.jar /opt/app/policy/lib/jersey-server-3.1.10.jar /opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar /opt/app/policy/lib/jetty-http-12.0.21.jar /opt/app/policy/lib/jetty-io-12.0.21.jar /opt/app/policy/lib/jetty-security-12.0.21.jar /opt/app/policy/lib/jetty-server-12.0.21.jar /opt/app/policy/lib/jetty-session-12.0.21.jar /opt/app/policy/lib/jetty-util-12.0.21.jar /opt/app/policy/lib/joda-time-2.10.2.jar /opt/app/policy/lib/joni-2.2.1.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsoup-1.17.2.jar /opt/app/policy/lib/jspecify-1.0.0.jar /opt/app/policy/lib/kafka-clients-3.9.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.5.18.jar /opt/app/policy/lib/logback-core-1.5.18.jar /opt/app/policy/lib/lombok-1.18.38.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.43.0.jar /opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar /opt/app/policy/lib/opentelemetry-context-1.43.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.6.0.jar /opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar 
/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.5.jar /opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.8.jar /opt/app/policy/lib/slf4j-api-2.0.17.jar /opt/app/policy/lib/snakeyaml-2.4.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.29.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + xargs -I X printf ':%s' X policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy
/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/o
pt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-a
nnotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + /opt/app/policy/bin/configure-maven policy-drools-pdp | + export 'M2_HOME=/home/policy/.m2' policy-drools-pdp | + mkdir -p /home/policy/.m2 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp 
/opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy
/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/pol
icy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-
2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main policy-drools-pdp | [2025-06-14T07:47:00.808+00:00|INFO|LifecycleFsm|main] The mandatory Policy Types are []. Compliance is true policy-drools-pdp | [2025-06-14T07:47:00.811+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [org.onap.policy.drools.lifecycle.LifecycleFeature@2235eaab] policy-drools-pdp | [2025-06-14T07:47:00.820+00:00|INFO|PolicyContainer|main] PolicyContainer.main: configDir=config policy-drools-pdp | [2025-06-14T07:47:00.821+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-14T07:47:00.830+00:00|INFO|IndexedKafkaTopicSourceFactory|main] IndexedKafkaTopicSourceFactory []: no topic for KAFKA Source policy-drools-pdp | [2025-06-14T07:47:00.831+00:00|INFO|IndexedKafkaTopicSinkFactory|main] IndexedKafkaTopicSinkFactory []: no topic for KAFKA Sink policy-drools-pdp | [2025-06-14T07:47:01.270+00:00|INFO|PolicyEngineManager|main] lock manager is org.onap.policy.drools.system.internal.SimpleLockManager@376a312c policy-drools-pdp | [2025-06-14T07:47:01.280+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START policy-drools-pdp | [2025-06-14T07:47:01.293+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-drools-pdp | [2025-06-14T07:47:01.294+00:00|INFO|JettyServletServer|CONFIG-9696] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-drools-pdp | [2025-06-14T07:47:01.302+00:00|INFO|Server|CONFIG-9696] jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 policy-drools-pdp | [2025-06-14T07:47:01.332+00:00|INFO|DefaultSessionIdManager|CONFIG-9696] Session workerName=node0 policy-drools-pdp | [2025-06-14T07:47:01.340+00:00|INFO|ContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. 
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 14, 2025 7:47:02 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored.
policy-drools-pdp | [2025-06-14T07:47:02.157+00:00|INFO|GsonMessageBodyHandler|CONFIG-9696] Using GSON for REST calls policy-drools-pdp | [2025-06-14T07:47:02.158+00:00|INFO|JacksonHandler|CONFIG-9696] Using GSON with Jackson behaviors for REST calls policy-drools-pdp | [2025-06-14T07:47:02.160+00:00|INFO|YamlMessageBodyHandler|CONFIG-9696] Accepting YAML for REST calls policy-drools-pdp | [2025-06-14T07:47:02.325+00:00|INFO|ServletContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | [2025-06-14T07:47:02.334+00:00|INFO|AbstractConnector|CONFIG-9696] Started CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696} policy-drools-pdp | [2025-06-14T07:47:02.336+00:00|INFO|Server|CONFIG-9696] Started oejs.Server@3276732{STARTING}[12.0.21,sto=0] @2672ms policy-drools-pdp | [2025-06-14T07:47:02.336+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 8958 ms. 
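Editor's note: the CONFIG server that just finished starting listens on 0.0.0.0:9696 with a Prometheus servlet at /metrics. A minimal probe sketch using only the JDK HTTP client, assuming the basic-auth credentials printed in the JettyServletServer toString() above and that port 9696 is reachable from the caller:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class MetricsProbe {
        public static void main(String[] args) throws Exception {
            // Credentials as printed in the JettyServletServer toString() above.
            String auth = Base64.getEncoder().encodeToString(
                    "demo@people.osaaf.org:demo123456!".getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9696/metrics"))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode()); // expect 200 once the servlet is up
        }
    }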
policy-drools-pdp | [2025-06-14T07:47:02.345+00:00|INFO|LifecycleFsm|main] lifecycle event: start engine policy-drools-pdp | [2025-06-14T07:47:02.494+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-1 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = ca5c4804-cc38-4c67-847a-b6b5f5acab5c policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | 
sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | [2025-06-14T07:47:02.531+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-14T07:47:02.605+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-14T07:47:02.605+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-14T07:47:02.606+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887222604 policy-drools-pdp | [2025-06-14T07:47:02.608+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-1, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | [2025-06-14T07:47:02.608+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ca5c4804-cc38-4c67-847a-b6b5f5acab5c, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e6308a9 policy-drools-pdp | [2025-06-14T07:47:02.624+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ca5c4804-cc38-4c67-847a-b6b5f5acab5c, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-14T07:47:02.625+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = ca5c4804-cc38-4c67-847a-b6b5f5acab5c policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 
policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | [2025-06-14T07:47:02.625+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-14T07:47:02.635+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-14T07:47:02.635+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-14T07:47:02.636+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887222635 policy-drools-pdp | [2025-06-14T07:47:02.636+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | 
[2025-06-14T07:47:02.637+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ca5c4804-cc38-4c67-847a-b6b5f5acab5c, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-14T07:47:02.641+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fcc0460f-4f34-4223-9696-503bafeb9ed4, alive=false, publisher=null]]: starting policy-drools-pdp | [2025-06-14T07:47:02.655+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-drools-pdp | acks = -1 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | batch.size = 16384 policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | buffer.memory = 33554432 policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = producer-1 policy-drools-pdp | compression.gzip.level = -1 policy-drools-pdp | compression.lz4.level = 9 policy-drools-pdp | compression.type = none policy-drools-pdp | compression.zstd.level = 3 policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | delivery.timeout.ms = 120000 policy-drools-pdp | enable.idempotence = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | linger.ms = 0 policy-drools-pdp | max.block.ms = 60000 policy-drools-pdp | max.in.flight.requests.per.connection = 5 policy-drools-pdp | max.request.size = 1048576 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.max.idle.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partitioner.adaptive.partitioning.enable = true policy-drools-pdp | partitioner.availability.timeout.ms = 0 policy-drools-pdp | partitioner.class = null policy-drools-pdp | partitioner.ignore.keys = false policy-drools-pdp | receive.buffer.bytes = 32768 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retries = 2147483647 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | 
sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | transaction.timeout.ms = 60000 policy-drools-pdp | transactional.id = null policy-drools-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | policy-drools-pdp | [2025-06-14T07:47:02.656+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-14T07:47:02.668+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
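Editor's note: the ProducerConfig dump above describes a plain idempotent String/String producer pointed at kafka:9092. A minimal sketch reproducing the key settings from the log (the topic name policy-pdp-pap comes from the surrounding source/sink configuration; the payload below is illustrative, not a real PDP message):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // acks = -1 in the log
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // enable.idempotence = true
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Illustrative payload only; the real PDP publishes PdpStatus JSON messages.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
                producer.flush();
            }
        }
    }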
policy-drools-pdp | [2025-06-14T07:47:02.687+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-14T07:47:02.688+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-14T07:47:02.688+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887222687 policy-drools-pdp | [2025-06-14T07:47:02.688+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fcc0460f-4f34-4223-9696-503bafeb9ed4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-drools-pdp | [2025-06-14T07:47:02.691+00:00|INFO|LifecycleStateDefault|main] LifecycleStateTerminated(): state-change from TERMINATED to PASSIVE policy-drools-pdp | [2025-06-14T07:47:02.691+00:00|INFO|LifecycleFsm|pool-2-thread-1] lifecycle event: status policy-drools-pdp | [2025-06-14T07:47:02.692+00:00|INFO|MdcTransactionImpl|main] policy-drools-pdp | [2025-06-14T07:47:02.696+00:00|INFO|Main|main] Started policy-drools-pdp service successfully. policy-drools-pdp | [2025-06-14T07:47:02.712+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-14T07:47:03.090+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-drools-pdp | [2025-06-14T07:47:03.090+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-drools-pdp | [2025-06-14T07:47:03.094+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-drools-pdp | [2025-06-14T07:47:03.100+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-drools-pdp | [2025-06-14T07:47:03.140+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] (Re-)joining group policy-drools-pdp | [2025-06-14T07:47:03.162+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Request joining group due to: need to re-join with the given member-id: consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444 policy-drools-pdp | [2025-06-14T07:47:03.162+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] (Re-)joining group policy-drools-pdp | [2025-06-14T07:47:06.170+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444', protocol='range'} policy-drools-pdp | [2025-06-14T07:47:06.181+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, 
groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Finished assignment for group at generation 1: {consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444=Assignment(partitions=[policy-pdp-pap-0])} policy-drools-pdp | [2025-06-14T07:47:06.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2-87a8c80c-f113-4458-9fab-9ef1b5d6d444', protocol='range'} policy-drools-pdp | [2025-06-14T07:47:06.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-drools-pdp | [2025-06-14T07:47:06.192+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Adding newly assigned partitions: policy-pdp-pap-0 policy-drools-pdp | [2025-06-14T07:47:06.200+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Found no committed offset for partition policy-pdp-pap-0 policy-drools-pdp | [2025-06-14T07:47:06.211+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ca5c4804-cc38-4c67-847a-b6b5f5acab5c-2, groupId=ca5c4804-cc38-4c67-847a-b6b5f5acab5c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.8:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.7:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-14T07:46:49.708+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 61 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-14T07:46:49.709+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-14T07:46:51.330+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-14T07:46:51.436+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 91 ms. Found 7 JPA repository interfaces. 
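Editor's note: the consumer activity logged above for policy-pdp-pap (subscribe, group join, range assignment of policy-pdp-pap-0, offset reset to latest) is the standard classic-consumer flow. A minimal sketch with the same key settings from the ConsumerConfig dumps (the group id below is illustrative; the real one is a generated UUID):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");   // real group id is a UUID
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // as in the log
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // triggers the group join/assignment seen above
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }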
policy-pap | [2025-06-14T07:46:52.535+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-14T07:46:52.549+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-14T07:46:52.552+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-14T07:46:52.552+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-14T07:46:52.620+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-14T07:46:52.621+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2844 ms policy-pap | [2025-06-14T07:46:53.089+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-14T07:46:53.174+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-14T07:46:53.228+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-14T07:46:53.680+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-14T07:46:53.726+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-14T07:46:53.948+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@53a16dd6 policy-pap | [2025-06-14T07:46:53.950+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-14T07:46:54.050+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-14T07:46:56.129+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-14T07:46:56.133+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-14T07:46:57.464+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = f3ce3b90-f339-4757-bafd-fc536e7a0824 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-14T07:46:57.529+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:57.690+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:57.691+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:57.691+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887217688 policy-pap | [2025-06-14T07:46:57.693+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-1, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-14T07:46:57.694+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-14T07:46:57.695+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:57.704+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:57.704+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:57.704+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887217703 policy-pap | [2025-06-14T07:46:57.704+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-14T07:46:58.084+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-14T07:46:58.233+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. 
Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-14T07:46:58.315+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-14T07:46:58.553+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-14T07:46:59.349+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-14T07:46:59.479+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-14T07:46:59.509+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-14T07:46:59.532+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-14T07:46:59.533+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-14T07:46:59.534+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-14T07:46:59.534+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-14T07:46:59.534+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-14T07:46:59.535+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-14T07:46:59.535+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-14T07:46:59.538+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f3ce3b90-f339-4757-bafd-fc536e7a0824, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@72cd5f41 policy-pap | [2025-06-14T07:46:59.550+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f3ce3b90-f339-4757-bafd-fc536e7a0824, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-14T07:46:59.550+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap 
| bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = f3ce3b90-f339-4757-bafd-fc536e7a0824 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 
45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-14T07:46:59.551+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:59.558+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:59.558+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:59.558+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887219558 policy-pap | [2025-06-14T07:46:59.559+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-14T07:46:59.560+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-14T07:46:59.560+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=05848b12-237d-4d78-8d55-e15a4c2c1a84, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7a77ff45 policy-pap | [2025-06-14T07:46:59.560+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=05848b12-237d-4d78-8d55-e15a4c2c1a84, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-14T07:46:59.560+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = 
[kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap 
| socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-14T07:46:59.561+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:59.566+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:59.566+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:59.566+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887219566 policy-pap | [2025-06-14T07:46:59.567+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-14T07:46:59.569+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-14T07:46:59.569+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=05848b12-237d-4d78-8d55-e15a4c2c1a84, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-14T07:46:59.569+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f3ce3b90-f339-4757-bafd-fc536e7a0824, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-14T07:46:59.569+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be79d3dd-f11a-4cc6-8a45-c55b2a722ed6, alive=false, publisher=null]]: starting policy-pap | [2025-06-14T07:46:59.586+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 
16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | 
ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-14T07:46:59.588+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:59.603+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-14T07:46:59.622+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:59.622+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:59.623+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887219622 policy-pap | [2025-06-14T07:46:59.623+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be79d3dd-f11a-4cc6-8a45-c55b2a722ed6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-14T07:46:59.623+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=21c48b31-c8e1-46c6-b18c-889abadaee54, alive=false, publisher=null]]: starting policy-pap | [2025-06-14T07:46:59.623+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 
2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-14T07:46:59.624+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T07:46:59.624+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
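For readers tracing the PAP startup, the two ProducerConfig dumps above come from the standard Apache Kafka Java client (kafka-clients 3.9.1 per the AppInfoParser lines). The following is a minimal sketch of how such an idempotent publisher is typically constructed; the property values are taken from the log, while the class name PdpMessageSender and the example payload are illustrative assumptions, not the actual policy-pap source.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative sketch only: mirrors the ProducerConfig values dumped above.
public final class PdpMessageSender {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // bootstrap.servers = [kafka:9092]
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // acks = -1 (wait for all in-sync replicas)
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // "Instantiated an idempotent producer"
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);        // retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);                      // linger.ms = 0
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name taken from the log; the JSON body is a placeholder, not a real PDP_UPDATE message.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}

With acks=-1 plus idempotence enabled, retries up to Integer.MAX_VALUE are safe against duplicates, which is why the log shows both producers created with these defaults.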
policy-pap | [2025-06-14T07:46:59.628+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T07:46:59.628+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T07:46:59.628+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749887219628 policy-pap | [2025-06-14T07:46:59.629+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=21c48b31-c8e1-46c6-b18c-889abadaee54, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-14T07:46:59.629+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-14T07:46:59.629+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-14T07:46:59.633+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-14T07:46:59.634+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-14T07:46:59.635+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-14T07:46:59.636+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-14T07:46:59.637+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-14T07:46:59.637+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-14T07:46:59.639+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-14T07:46:59.639+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-14T07:46:59.643+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-14T07:46:59.644+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.782 seconds (process running for 11.387) policy-pap | [2025-06-14T07:47:00.166+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-pap | [2025-06-14T07:47:00.167+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-pap | [2025-06-14T07:47:00.166+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-14T07:47:00.167+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-pap | [2025-06-14T07:47:00.208+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-14T07:47:00.212+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-14T07:47:00.229+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2025-06-14T07:47:00.230+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 39Nu5_8lRbaMBkBXvQZwoQ policy-pap | [2025-06-14T07:47:00.379+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-14T07:47:00.399+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-14T07:47:01.791+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-14T07:47:01.801+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] (Re-)joining group policy-pap | [2025-06-14T07:47:01.829+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-14T07:47:01.834+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Request joining group due to: need to re-join with the given member-id: consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9 policy-pap | [2025-06-14T07:47:01.835+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] (Re-)joining group policy-pap | [2025-06-14T07:47:01.835+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-14T07:47:01.839+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37 policy-pap | [2025-06-14T07:47:01.840+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-14T07:47:04.861+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37', protocol='range'} policy-pap | [2025-06-14T07:47:04.864+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Successfully joined group with generation Generation{generationId=1, memberId='consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9', protocol='range'} policy-pap | 
[2025-06-14T07:47:04.873+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Finished assignment for group at generation 1: {consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-14T07:47:04.873+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-14T07:47:04.915+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Successfully synced group in generation Generation{generationId=1, memberId='consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3-85b5d546-c1b8-4761-ac29-dd2f1b844ac9', protocol='range'} policy-pap | [2025-06-14T07:47:04.916+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-14T07:47:04.916+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-37f9e8a9-0240-43e9-8619-82db75a30e37', protocol='range'} policy-pap | [2025-06-14T07:47:04.917+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-14T07:47:04.922+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-14T07:47:04.922+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-14T07:47:04.940+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-14T07:47:04.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-14T07:47:04.961+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f3ce3b90-f339-4757-bafd-fc536e7a0824-3, groupId=f3ce3b90-f339-4757-bafd-fc536e7a0824] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
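The consumer side shown in the ConsumerConfig dump and the rebalance messages above (group policy-pap joining and being assigned policy-pdp-pap-0) follows the usual subscribe/poll pattern of the same Java client. Below is a minimal sketch under that assumption; the class name and the record handling are illustrative, not policy-pap code.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative sketch only: mirrors the ConsumerConfig values logged for consumer-policy-pap-4.
public final class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");            // group.id = policy-pap
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);          // enable.auto.commit = true
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");       // assumed; consistent with the "Resetting offset" lines
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                  // "Subscribed to topic(s): policy-pdp-pap"
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000)); // fetchTimeout=15000 in the log
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}

The earlier UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings are the transient metadata responses seen while the broker is still creating policy-pdp-pap; once a leader exists, the group coordinator assigns partition 0 to each group as logged here.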
policy-pap | [2025-06-14T07:47:04.961+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-14T07:47:41.617+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-14T07:47:41.618+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-14T07:47:41.621+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-14 07:46:19.279 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-14 07:46:19.281 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-14 07:46:19.289 UTC [51] LOG: database system was shut down at 2025-06-14 07:46:18 UTC postgres | 2025-06-14 07:46:19.296 UTC [48] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. 
postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling 
policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-14 07:46:20.855 UTC [48] LOG: received fast shutdown request postgres | waiting for server to shut down....2025-06-14 07:46:20.857 UTC [48] LOG: aborting any active transactions postgres | 2025-06-14 07:46:20.860 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 postgres | 2025-06-14 07:46:20.863 UTC [49] LOG: shutting down postgres | 2025-06-14 07:46:20.865 UTC [49] LOG: checkpoint starting: shutdown immediate postgres | .2025-06-14 07:46:21.937 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.705 s, sync=0.358 s, total=1.075 s; sync files=1788, longest=0.027 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-14 07:46:21.951 UTC [48] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-14 07:46:21.984 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-14 07:46:21.984 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-14 07:46:21.984 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-14 07:46:21.991 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-14 07:46:22.001 UTC [101] LOG: database system was shut down at 2025-06-14 07:46:21 UTC postgres | 2025-06-14 07:46:22.006 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-14T07:46:17.014Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-14T07:46:17.014Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-14T07:46:17.014Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-14T07:46:17.018Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-14T07:46:17.021Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-14T07:46:17.022Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-14T07:46:17.029Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-14T07:46:17.029Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
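The db-pg.sh trace above creates the policy_user role and then loops over the six policy databases with plain psql commands. Purely as an illustration of the same bootstrap in the Java ecosystem used by the rest of the stack, a hedged JDBC sketch follows; the connection URL, password handling, and class name are assumptions, and the real job performs this step via the shell script shown in the log, not via Java.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

// Illustrative sketch only: the same CREATE USER / CREATE DATABASE / GRANT loop as db-pg.sh, via JDBC.
public final class PolicyDbBootstrap {
    public static void main(String[] args) throws Exception {
        // URL and credentials are assumptions for this sketch; the CSIT job talks to the "postgres" container.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://postgres:5432/postgres", "postgres", System.getenv("POSTGRES_PASSWORD"));
             Statement st = conn.createStatement()) {
            st.execute("CREATE USER policy_user WITH PASSWORD 'policy_user'");
            for (String db : List.of("migration", "pooling", "policyadmin",
                                     "policyclamp", "operationshistory", "clampacm")) {
                st.execute("CREATE DATABASE " + db);                          // CREATE DATABASE cannot run in a transaction;
                st.execute("ALTER DATABASE " + db + " OWNER TO policy_user"); // JDBC autocommit (the default) is required here.
                st.execute("GRANT ALL PRIVILEGES ON DATABASE " + db + " TO policy_user");
            }
        }
    }
}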
component=web http2=false address=[::]:9090 prometheus | time=2025-06-14T07:46:17.031Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-14T07:46:17.031Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=3.27µs prometheus | time=2025-06-14T07:46:17.032Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-14T07:46:17.033Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=865.748µs prometheus | time=2025-06-14T07:46:17.033Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=57.371µs wal_replay_duration=921.389µs wbl_replay_duration=290ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=3.27µs total_replay_duration=1.267292ms prometheus | time=2025-06-14T07:46:17.036Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-14T07:46:17.036Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-14T07:46:17.036Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-14T07:46:17.038Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-14T07:46:17.038Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.66µs remote_storage=2.51µs web_handler=1.02µs query_engine=1.38µs scrape=431.004µs scrape_sd=309.163µs notify=198.782µs notify_sd=22.37µs rules=3.03µs tracing=7.88µs filename=/etc/prometheus/prometheus.yml totalDuration=1.797527ms prometheus | time=2025-06-14T07:46:17.038Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-14T07:46:17.038Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-14 07:46:22,730] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,732] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,732] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,732] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,732] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,734] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 07:46:22,734] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 07:46:22,734] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 07:46:22,734] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-14 07:46:22,735] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-14 07:46:22,735] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,735] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,736] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,736] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,736] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 07:46:22,736] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-14 07:46:22,746] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-14 07:46:22,750] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-14 07:46:22,750] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-14 07:46:22,752] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 07:46:22,760] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,760] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,761] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,761] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,761] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,761] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-14 07:46:22,761] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,761] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,762] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,763] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-14 07:46:22,764] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,764] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,765] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-14 07:46:22,765] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,766] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 07:46:22,768] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,768] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,768] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-14 07:46:22,768] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-14 07:46:22,768] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:22,807] INFO Logging initialized @492ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-14 07:46:22,890] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 07:46:22,890] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 07:46:22,913] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-14 07:46:22,947] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 07:46:22,947] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 07:46:22,948] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 07:46:22,953] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-14 07:46:22,964] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 07:46:22,974] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-14 07:46:22,974] INFO Started @669ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-14 07:46:22,974] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-14 07:46:22,978] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-14 07:46:22,979] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-14 07:46:22,980] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-14 07:46:22,981] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-14 07:46:22,995] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-14 07:46:22,995] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-14 07:46:22,995] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 07:46:22,995] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 07:46:22,999] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-14 07:46:22,999] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 07:46:23,002] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 07:46:23,003] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 07:46:23,003] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 07:46:23,012] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-14 07:46:23,015] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-14 07:46:23,026] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-14 07:46:23,026] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-14 07:46:24,248] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
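Before the teardown above, the ZooKeeper log shows a standalone server created with tickTime 3000 ms, a client port on 0.0.0.0:2181, and an AdminServer on port 8080, which the Kafka broker then registers against. As a hedged illustration only, a throwaway session with the official org.apache.zookeeper Java client against that instance might look as follows; the "zookeeper" hostname is the compose service name and the smoke check itself is an assumption, not part of the CSIT job.

import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

// Illustrative sketch only: connect to the standalone server logged above and list the root znodes.
public final class ZkSmokeCheck {
    public static void main(String[] args) throws Exception {
        // Connect string and session timeout taken from the log (port 2181, maxSessionTimeout 60000 ms).
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 60000, (WatchedEvent event) -> { });
        try {
            // In a ZooKeeper-based Kafka deployment, the broker publishes metadata under paths such as /brokers.
            List<String> children = zk.getChildren("/", false);
            System.out.println("znodes under /: " + children);
        } finally {
            zk.close();
        }
    }
}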
Container policy-drools-pdp Stopping Container grafana Stopping Container policy-csit Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-drools-pdp Stopped Container policy-drools-pdp Removing Container policy-drools-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container postgres Stopping Container postgres Stopped Container postgres Removing Container postgres Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2104 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins752362572283234025.sh ---> sysstat.sh [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins10474826447407155350.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins16022645659188715202.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-28xT from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-28xT/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins8524567548060556757.sh provisioning config files... 
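Note: the package-listing.sh trace above captures the dpkg package list at the end of the job, diffs it against the snapshot taken at job start, and copies all three files into the workspace archives. A condensed sketch of that logic, using the paths shown in the trace (the real script ships with the LF CI tooling, so treat this as an approximation):

#!/usr/bin/env bash
# Sketch of the package-listing step traced above (Debian/dpkg branch only).
set -euo pipefail
workspace=/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
start=/tmp/packages_start.txt
end=/tmp/packages_end.txt
diff_out=/tmp/packages_diff.txt

# Running inside a workspace means we are at job end, so write the "end" snapshot.
dpkg -l | grep '^ii' > "$end"

# When both snapshots exist, record what changed during the build.
if [ -f "$start" ] && [ -f "$end" ]; then
  diff "$start" "$end" > "$diff_out" || true   # diff returns non-zero when the lists differ
fi

mkdir -p "$workspace/archives/"
cp -f "$diff_out" "$end" "$start" "$workspace/archives/"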
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins8524567548060556757.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/config3615513089971316766tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins12739309619142404071.sh
---> create-netrc.sh
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins17219294309490946121.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-28xT from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-28xT/bin to PATH
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins3380188824518945149.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins1793290867843033670.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-28xT from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-28xT/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash -l /tmp/jenkins7466174158608732703.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-28xT from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-28xT/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-drools-pdp/2033
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
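Note: each post-build step above re-activates the same Python toolbox: lf-activate-venv() reuses the virtualenv recorded in /tmp/.os_lf_venv (here /tmp/venv-28xT), installs whatever that step needs (lftools, or the pinned zipp/python-openstackclient/urllib3 set for job-cost.sh), and prepends the venv's bin directory to PATH. A simplified sketch of that reuse pattern, with the file locations taken from the log (the real helper comes from the LF python-tools scripts, so this is an approximation):

# Sketch of the lf-activate-venv() reuse pattern seen in the log (approximation).
lf_activate_venv_sketch() {
  local venv_file=/tmp/.os_lf_venv
  local venv_dir
  if [ -f "$venv_file" ]; then
    venv_dir=$(cat "$venv_file")             # e.g. /tmp/venv-28xT
    echo "INFO: Reuse venv:$venv_dir from file:$venv_file"
  else
    venv_dir=$(mktemp -d /tmp/venv-XXXXXX)
    python3 -m venv "$venv_dir"
    echo "$venv_dir" > "$venv_file"
    echo "INFO: Creating python3 venv at $venv_dir"
  fi
  "$venv_dir/bin/pip" install --quiet --upgrade "$@"
  export PATH="$venv_dir/bin:$PATH"
}

# Usage, mirroring the job-cost.sh step:
#   lf_activate_venv_sketch zipp==1.1.0 python-openstackclient 'urllib3~=1.26.15'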
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-21077 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):            8
NUMA node(s):         1
Vendor ID:            AuthenticAMD
CPU family:           23
Model:                49
Model name:           AMD EPYC-Rome Processor
Stepping:             0
CPU MHz:              2799.998
BogoMIPS:             5599.99
Virtualization:       AMD-V
Hypervisor vendor:    KVM
Virtualization type:  full
L1d cache:            32K
L1i cache:            32K
L2 cache:             512K
L3 cache:             16384K
NUMA node0 CPU(s):    0-7
Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem  Size  Used  Avail  Use%  Mounted on
udev        16G   0     16G    0%    /dev
tmpfs       3.2G  708K  3.2G   1%    /run
/dev/vda1   155G  15G   140G   10%   /
tmpfs       16G   0     16G    0%    /dev/shm
tmpfs       5.0M  0     5.0M   0%    /run/lock
tmpfs       16G   0     16G    0%    /sys/fs/cgroup
/dev/vda15  105M  4.4M  100M   5%    /boot/efi
tmpfs       3.2G  0     3.2G   0%    /run/user/1001

---> free -m:
       total  used  free   shared  buff/cache  available
Mem:   32167  880   23665  0       7620        30831
Swap:  1023   0     1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:63:54:41 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.1/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86054sec preferred_lft 86054sec
    inet6 fe80::f816:3eff:fe63:5441/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:f7:64:e4:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f7ff:fe64:e4a1/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21077)  06/14/25  _x86_64_  (8 CPU)

07:43:50  LINUX RESTART  (8 CPU)

07:44:01  tps     rtps   wtps    bread/s  bwrtn/s
07:45:02  390.50  73.62  316.88  5303.38  107627.13
07:46:01  418.71  20.71  398.00  2323.13  197138.79
07:47:01  432.62  2.63   429.99  402.93   103925.36
07:48:01  181.24  0.20   181.04  21.86    28365.67
07:49:01  100.00  1.38   98.62   67.06    6812.73
Average:  304.24  19.71  284.53  1621.29  88412.09

07:44:01  kbmemfree  kbavail   kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
07:45:02  30051904   31610832  2887316    8.77      67000      1801412   1504192   4.43     941384    1651624  184832
07:46:01  24917568   31637756  8021652    24.35     149060     6642676   1678708   4.94     1014708   6419220  1673760
07:47:01  22690416   29750232  10248804   31.11     165564     6966208   8439132   24.83    3115532   6460256  2324
07:48:01  21632312   29540916  11306908   34.33     200232     7709272   8624736   25.38    3455800   7098816  84164
07:49:01  24179052   31511824  8760168    26.59     207512     7141268   1657480   4.88     1519244   6584820  28652
Average:  24694250   30810312  8244970    25.03     157874     6052167   4380850   12.89    2009334   5642947  394746

07:44:01  IFACE        rxpck/s  txpck/s  rxkB/s    txkB/s   rxcmp/s  txcmp/s  rxmcst/s  %ifutil
07:45:02  lo           1.67     1.67     0.18      0.18     0.00     0.00     0.00      0.00
07:45:02  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
07:45:02  ens3         564.27   356.11   1661.56   82.44    0.00     0.00     0.00      0.00
07:46:01  lo           13.56    13.56    1.25      1.25     0.00     0.00     0.00      0.00
07:46:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
07:46:01  ens3         1385.87  798.46   38776.58  66.55    0.00     0.00     0.00      0.00
07:47:01  lo           1.33     1.33     0.11      0.11     0.00     0.00     0.00      0.00
07:47:01  vethcbfd5b0  32.87    40.44    2.57      310.55   0.00     0.00     0.00      0.03
07:47:01  vethcbf402a  4.05     5.30     0.71      0.81     0.00     0.00     0.00      0.00
07:47:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
07:48:01  lo           2.57     2.57     0.21      0.21     0.00     0.00     0.00      0.00
07:48:01  vethcbfd5b0  0.50     0.35     0.03      0.02     0.00     0.00     0.00      0.00
07:48:01  vethcbf402a  0.17     0.37     0.01      0.03     0.00     0.00     0.00      0.00
07:48:01  docker0      149.66   207.25   9.40      1350.43  0.00     0.00     0.00      0.00
07:49:01  lo           2.53     2.53     0.24      0.24     0.00     0.00     0.00      0.00
07:49:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
07:49:01  ens3         2296.72  1401.02  42381.97  193.63   0.00     0.00     0.00      0.00
Average:  lo           4.30     4.30     0.39      0.39     0.00     0.00     0.00      0.00
Average:  docker0      30.03    41.59    1.89      270.98   0.00     0.00     0.00      0.00
Average:  ens3         457.76   280.12   8503.22   38.75    0.00     0.00     0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21077)  06/14/25  _x86_64_  (8 CPU)

07:43:50  LINUX RESTART  (8 CPU)

07:44:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
07:45:02  all  9.27   0.00   1.40     3.63     0.03    85.66
07:45:02  0    5.54   0.00   0.93     5.71     0.03    87.79
07:45:02  1    9.72   0.00   2.72     0.42     0.03    87.11
07:45:02  2    11.94  0.00   0.88     0.37     0.03    86.77
07:45:02  3    4.84   0.00   1.71     15.95    0.03    77.46
07:45:02  4    25.72  0.00   2.19     2.16     0.07    69.86
07:45:02  5    8.03   0.00   1.02     0.42     0.03    90.50
07:45:02  6    4.55   0.00   1.00     0.27     0.03    94.15
07:45:02  7    3.79   0.00   0.70     3.84     0.02    91.65
07:46:01  all  19.22  0.00   8.45     8.02     0.08    64.22
07:46:01  0    15.26  0.00   9.16     10.70    0.09    64.79
07:46:01  1    19.19  0.00   8.83     2.33     0.07    69.59
07:46:01  2    33.99  0.00   9.42     7.60     0.10    48.88
07:46:01  3    15.70  0.00   7.89     8.97     0.09    67.36
07:46:01  4    22.15  0.00   8.50     11.14    0.07    58.13
07:46:01  5    15.01  0.00   8.63     18.07    0.07    58.21
07:46:01  6    16.01  0.00   7.52     2.16     0.07    74.24
07:46:01  7    16.48  0.00   7.64     3.19     0.09    72.61
07:47:01  all  27.65  0.00   4.45     6.40     0.08    61.43
07:47:01  0    27.29  0.00   4.27     3.40     0.07    64.98
07:47:01  1    38.58  0.00   5.24     2.42     0.08    53.68
07:47:01  2    30.30  0.00   5.22     15.59    0.10    48.79
07:47:01  3    26.22  0.00   3.79     2.78     0.07    67.14
07:47:01  4    24.93  0.00   4.48     8.43     0.10    62.06
07:47:01  5    31.74  0.00   5.60     2.62     0.08    59.96
07:47:01  6    14.94  0.00   3.54     2.98     0.05    78.49
07:47:01  7    27.21  0.00   3.50     12.94    0.08    56.25
07:48:01  all  9.14   0.00   2.08     1.35     0.06    87.38
07:48:01  0    10.06  0.00   2.98     2.26     0.07    84.64
07:48:01  1    9.98   0.00   2.43     0.07     0.05    87.47
07:48:01  2    10.56  0.00   1.66     0.25     0.05    87.49
07:48:01  3    7.05   0.00   1.60     2.54     0.05    88.76
07:48:01  4    7.48   0.00   2.13     2.43     0.07    87.90
07:48:01  5    6.99   0.00   2.16     2.82     0.07    87.96
07:48:01  6    14.92  0.00   2.26     0.40     0.07    82.36
07:48:01  7    6.01   0.00   1.39     0.07     0.05    92.48
07:49:01  all  4.96   0.00   1.55     0.40     0.04    93.05
07:49:01  0    6.55   0.00   2.75     1.22     0.05    89.43
07:49:01  1    2.69   0.00   1.17     0.73     0.03    95.37
07:49:01  2    2.82   0.00   1.19     0.02     0.03    95.94
07:49:01  3    2.94   0.00   0.97     0.08     0.03    95.97
07:49:01  4    3.27   0.00   1.63     0.05     0.03    95.02
07:49:01  5    16.60  0.00   2.18     0.82     0.03    80.37
07:49:01  6    2.67   0.00   1.10     0.28     0.03    95.90
07:49:01  7    2.17   0.00   1.44     0.03     0.03    96.32
Average:  all  14.01  0.00   3.56     3.94     0.06    78.43
Average:  0    12.92  0.00   3.99     4.63     0.06    78.39
Average:  1    15.99  0.00   4.06     1.19     0.05    78.71
Average:  2    17.82  0.00   3.64     4.74     0.06    73.73
Average:  3    11.33  0.00   3.17     6.05     0.05    79.40
Average:  4    16.68  0.00   3.76     4.81     0.07    74.68
Average:  5    15.66  0.00   3.89     4.89     0.06    75.50
Average:  6    10.60  0.00   3.06     1.21     0.05    85.08
Average:  7    11.10  0.00   2.91     4.01     0.05    81.92
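Note: the tables above are standard sysstat output: sar -b -r -n DEV covers block I/O, memory, and per-interface network rates, and sar -P ALL covers per-CPU utilization, sampled roughly once per minute between 07:44 and 07:49. A minimal way to reproduce this kind of capture on a build node is sketched below; the interval, count, and output path are illustrative and may differ from the job's own sysstat.sh.

#!/usr/bin/env bash
# Illustrative sar capture similar to the report above (requires the sysstat package).
set -euo pipefail

# Sample system activity every 60 seconds, 5 times, into a binary data file...
sar -o /tmp/sa_build.bin 60 5 >/dev/null 2>&1

# ...then render the same views that appear in this log:
sar -f /tmp/sa_build.bin -b -r -n DEV   # I/O rates, memory usage, per-interface traffic
sar -f /tmp/sa_build.bin -P ALL         # per-CPU %user/%nice/%system/%iowait/%steal/%idle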