Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21755 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-9lkHwA5l04Jf/agent.2111
SSH_AGENT_PID=2113
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_8192953974735884740.key (/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_8192953974735884740.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision ed38a50541249063daf2cfb00b312fb173adeace (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ed38a50541249063daf2cfb00b312fb173adeace # timeout=30
Commit message: "Remove python from the java app docker images"
 > git rev-list --no-walk 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins1164235929673517056.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-TEuX
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-TEuX/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-TEuX/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.37
botocore==1.38.37
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh /tmp/jenkins12499121628665244127.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh -xe /tmp/jenkins2148455474830914329.sh
+ /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 60.2M  100 60.2M    0     0  77.3M      0 --:--:-- --:--:-- --:--:-- 77.3M
Setting project configuration for: drools-pdp
Configuring docker compose...
Starting drools-pdp using postgres + Grafana/Prometheus
kafka Pulling
drools-pdp Pulling
zookeeper Pulling
postgres Pulling
pap Pulling
policy-db-migrator Pulling
api Pulling
prometheus Pulling
grafana Pulling
[interleaved per-layer "Pulling fs layer" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress output omitted]
api Pulled
pap Pulled
[layer pull progress continues; log truncated mid-stream]
110.3MB/166.8MB e60d9caeb0b8 Extracting [==================================================>] 140B/140B eabd8714fec9 Extracting [===============> ] 114.2MB/375MB f61a19743345 Downloading [==================================================>] 3.524MB/3.524MB f61a19743345 Verifying Checksum f61a19743345 Download complete 09d5a3f70313 Downloading [================> ] 36.22MB/109.2MB 55f2b468da67 Downloading [=========================> ] 129.2MB/257.9MB 8af57d8c9f49 Downloading [> ] 97.22kB/8.735MB 1e017ebebdbd Extracting [=======================================> ] 29.49MB/37.19MB 4ba79830ebce Extracting [==================================> ] 114.2MB/166.8MB eabd8714fec9 Extracting [===============> ] 117MB/375MB 09d5a3f70313 Downloading [===================> ] 42.71MB/109.2MB 55f2b468da67 Downloading [============================> ] 144.9MB/257.9MB 8af57d8c9f49 Downloading [=================> ] 3.046MB/8.735MB e60d9caeb0b8 Pull complete 1e017ebebdbd Extracting [===========================================> ] 32.64MB/37.19MB f61a19743345 Extracting [> ] 65.54kB/3.524MB 4ba79830ebce Extracting [===================================> ] 118.1MB/166.8MB eabd8714fec9 Extracting [===============> ] 119.8MB/375MB 09d5a3f70313 Downloading [=========================> ] 55.69MB/109.2MB 55f2b468da67 Downloading [===============================> ] 161.7MB/257.9MB 8af57d8c9f49 Downloading [==============================================> ] 8.158MB/8.735MB 8af57d8c9f49 Verifying Checksum 8af57d8c9f49 Download complete c53a11b7c6fc Downloading [==> ] 3.01kB/58.08kB c53a11b7c6fc Downloading [==================================================>] 58.08kB/58.08kB c53a11b7c6fc Verifying Checksum c53a11b7c6fc Download complete 1e017ebebdbd Extracting [==============================================> ] 34.6MB/37.19MB f61a19743345 Extracting [====> ] 327.7kB/3.524MB 4ba79830ebce Extracting [====================================> ] 122.6MB/166.8MB eabd8714fec9 Extracting [================> ] 
122.6MB/375MB e032d0a5e409 Downloading [=====> ] 3.01kB/27.77kB e032d0a5e409 Download complete 09d5a3f70313 Downloading [===============================> ] 68.12MB/109.2MB 55f2b468da67 Downloading [=================================> ] 173MB/257.9MB c49e0ee60bfb Downloading [> ] 539.6kB/107.3MB 1e017ebebdbd Extracting [=================================================> ] 36.57MB/37.19MB f61a19743345 Extracting [==============================================> ] 3.277MB/3.524MB 4ba79830ebce Extracting [=====================================> ] 125.3MB/166.8MB eabd8714fec9 Extracting [================> ] 125.9MB/375MB 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB 09d5a3f70313 Downloading [======================================> ] 84.34MB/109.2MB 55f2b468da67 Downloading [====================================> ] 187.6MB/257.9MB f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB c49e0ee60bfb Downloading [====> ] 8.65MB/107.3MB 1e017ebebdbd Pull complete 4ba79830ebce Extracting [======================================> ] 127.6MB/166.8MB eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 09d5a3f70313 Downloading [=============================================> ] 98.4MB/109.2MB 55f2b468da67 Downloading [======================================> ] 199MB/257.9MB c49e0ee60bfb Downloading [=======> ] 16.76MB/107.3MB f61a19743345 Pull complete 8af57d8c9f49 Extracting [> ] 98.3kB/8.735MB 09d5a3f70313 Download complete 4ba79830ebce Extracting [=======================================> ] 130.9MB/166.8MB eabd8714fec9 Extracting [=================> ] 132MB/375MB 55f2b468da67 Downloading [=========================================> ] 214.6MB/257.9MB c49e0ee60bfb Downloading [============> ] 27.57MB/107.3MB 384497dbce3b Downloading [> ] 539.6kB/63.48MB 8af57d8c9f49 Extracting [==> ] 393.2kB/8.735MB 
eabd8714fec9 Extracting [==================> ] 135.4MB/375MB 4ba79830ebce Extracting [========================================> ] 134.8MB/166.8MB 55f2b468da67 Downloading [============================================> ] 229.2MB/257.9MB c49e0ee60bfb Downloading [===================> ] 42.71MB/107.3MB 384497dbce3b Downloading [=====> ] 6.487MB/63.48MB 8af57d8c9f49 Extracting [========================> ] 4.325MB/8.735MB eabd8714fec9 Extracting [==================> ] 138.7MB/375MB 55f2b468da67 Downloading [===============================================> ] 244.4MB/257.9MB 4ba79830ebce Extracting [=========================================> ] 138.7MB/166.8MB c49e0ee60bfb Downloading [==========================> ] 57.85MB/107.3MB 384497dbce3b Downloading [==============> ] 18.92MB/63.48MB 8af57d8c9f49 Extracting [=============================================> ] 7.963MB/8.735MB 8af57d8c9f49 Extracting [==================================================>] 8.735MB/8.735MB 55f2b468da67 Downloading [=================================================> ] 257.9MB/257.9MB 55f2b468da67 Download complete eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 4ba79830ebce Extracting [==========================================> ] 143.2MB/166.8MB 055b9255fa03 Downloading [============> ] 3.01kB/11.92kB 055b9255fa03 Download complete 8af57d8c9f49 Pull complete c49e0ee60bfb Downloading [===============================> ] 68.12MB/107.3MB 384497dbce3b Downloading [====================> ] 26.49MB/63.48MB c53a11b7c6fc Extracting [============================> ] 32.77kB/58.08kB c53a11b7c6fc Extracting [==================================================>] 58.08kB/58.08kB b176d7edde70 Downloading [==================================================>] 1.227kB/1.227kB b176d7edde70 Verifying Checksum b176d7edde70 Download complete 2d429b9e73a6 Downloading [> ] 293.8kB/29.13MB 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 384497dbce3b Downloading [===============================> ] 
40.01MB/63.48MB 4ba79830ebce Extracting [===========================================> ] 146.5MB/166.8MB c49e0ee60bfb Downloading [======================================> ] 81.64MB/107.3MB eabd8714fec9 Extracting [===================> ] 145.4MB/375MB 2d429b9e73a6 Downloading [==========> ] 6.192MB/29.13MB c53a11b7c6fc Pull complete e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB 55f2b468da67 Extracting [=> ] 5.571MB/257.9MB e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB eabd8714fec9 Extracting [===================> ] 147.1MB/375MB 384497dbce3b Downloading [=====================================> ] 47.04MB/63.48MB c49e0ee60bfb Downloading [=========================================> ] 89.21MB/107.3MB 4ba79830ebce Extracting [============================================> ] 148.7MB/166.8MB 2d429b9e73a6 Downloading [=======================> ] 13.86MB/29.13MB 384497dbce3b Verifying Checksum 384497dbce3b Download complete 55f2b468da67 Extracting [==> ] 15.04MB/257.9MB c49e0ee60bfb Downloading [===============================================> ] 102.7MB/107.3MB eabd8714fec9 Extracting [===================> ] 149.3MB/375MB 46eab5b44a35 Downloading [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Verifying Checksum 46eab5b44a35 Download complete 4ba79830ebce Extracting [=============================================> ] 153.2MB/166.8MB c49e0ee60bfb Verifying Checksum c49e0ee60bfb Download complete c4d302cc468d Downloading [> ] 48.06kB/4.534MB 2d429b9e73a6 Downloading [=======================================> ] 23.3MB/29.13MB 01e0882c90d9 Downloading [> ] 15.3kB/1.447MB e032d0a5e409 Pull complete 2d429b9e73a6 Verifying Checksum 2d429b9e73a6 Download complete 01e0882c90d9 Verifying Checksum 01e0882c90d9 Download complete 55f2b468da67 Extracting [====> ] 21.17MB/257.9MB 531ee2cf3c0c Downloading [> ] 80.83kB/8.066MB ed54a7dee1d8 Downloading [> ] 
15.3kB/1.196MB c4d302cc468d Verifying Checksum c4d302cc468d Download complete eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 12c5c803443f Downloading [==================================================>] 116B/116B 12c5c803443f Verifying Checksum 12c5c803443f Download complete ed54a7dee1d8 Verifying Checksum ed54a7dee1d8 Download complete 4ba79830ebce Extracting [===============================================> ] 157.6MB/166.8MB e27c75a98748 Downloading [===============================================> ] 3.011kB/3.144kB e27c75a98748 Downloading [==================================================>] 3.144kB/3.144kB e27c75a98748 Verifying Checksum e27c75a98748 Download complete a83b68436f09 Downloading [===============> ] 3.011kB/9.919kB a83b68436f09 Downloading [==================================================>] 9.919kB/9.919kB a83b68436f09 Verifying Checksum a83b68436f09 Download complete e73cb4a42719 Downloading [> ] 539.6kB/109.1MB 787d6bee9571 Downloading [==================================================>] 127B/127B 787d6bee9571 Verifying Checksum 787d6bee9571 Download complete 13ff0988aaea Downloading [==================================================>] 167B/167B 13ff0988aaea Verifying Checksum 13ff0988aaea Download complete c49e0ee60bfb Extracting [> ] 557.1kB/107.3MB 531ee2cf3c0c Downloading [=========================================> ] 6.716MB/8.066MB 4b82842ab819 Downloading [===========================> ] 3.011kB/5.415kB 4b82842ab819 Downloading [==================================================>] 5.415kB/5.415kB 4b82842ab819 Verifying Checksum 4b82842ab819 Download complete 55f2b468da67 Extracting [====> ] 23.4MB/257.9MB 7e568a0dc8fb Downloading [==================================================>] 184B/184B 531ee2cf3c0c Download complete 7e568a0dc8fb Download complete 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 9fa9226be034 Downloading [> ] 15.3kB/783kB eabd8714fec9 Extracting [====================> ] 156MB/375MB 1617e25568b2 
Downloading [==================================================>] 480.9kB/480.9kB 1617e25568b2 Verifying Checksum 1617e25568b2 Download complete 4ba79830ebce Extracting [===============================================> ] 159.3MB/166.8MB 2d429b9e73a6 Extracting [> ] 294.9kB/29.13MB 9fa9226be034 Download complete 9fa9226be034 Extracting [==> ] 32.77kB/783kB e73cb4a42719 Downloading [===> ] 8.109MB/109.1MB 6ac0e4adf315 Downloading [> ] 539.6kB/62.07MB c49e0ee60bfb Extracting [=> ] 3.342MB/107.3MB f3b09c502777 Downloading [> ] 539.6kB/56.52MB 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 2d429b9e73a6 Extracting [=====> ] 2.949MB/29.13MB 4ba79830ebce Extracting [================================================> ] 161.5MB/166.8MB eabd8714fec9 Extracting [=====================> ] 157.6MB/375MB e73cb4a42719 Downloading [========> ] 18.38MB/109.1MB 6ac0e4adf315 Downloading [====> ] 5.946MB/62.07MB c49e0ee60bfb Extracting [==> ] 5.571MB/107.3MB 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB f3b09c502777 Downloading [===> ] 4.324MB/56.52MB 9fa9226be034 Extracting [==================================================>] 783kB/783kB 55f2b468da67 Extracting [=====> ] 30.08MB/257.9MB 2d429b9e73a6 Extracting [==========> ] 5.898MB/29.13MB 4ba79830ebce Extracting [=================================================> ] 163.8MB/166.8MB e73cb4a42719 Downloading [=============> ] 29.74MB/109.1MB 6ac0e4adf315 Downloading [=============> ] 16.22MB/62.07MB eabd8714fec9 Extracting [=====================> ] 161MB/375MB c49e0ee60bfb Extracting [====> ] 8.913MB/107.3MB 9fa9226be034 Pull complete f3b09c502777 Downloading [========> ] 9.19MB/56.52MB 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 55f2b468da67 Extracting [=======> ] 36.21MB/257.9MB 2d429b9e73a6 Extracting [=============> ] 7.668MB/29.13MB e73cb4a42719 Downloading [=====================> ] 45.96MB/109.1MB 4ba79830ebce Extracting [=================================================> ] 166MB/166.8MB 6ac0e4adf315 
Downloading [======================> ] 28.11MB/62.07MB eabd8714fec9 Extracting [=====================> ] 163.2MB/375MB c49e0ee60bfb Extracting [====> ] 10.58MB/107.3MB f3b09c502777 Downloading [=============> ] 15.14MB/56.52MB 55f2b468da67 Extracting [========> ] 44.01MB/257.9MB 2d429b9e73a6 Extracting [================> ] 9.732MB/29.13MB 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB e73cb4a42719 Downloading [===========================> ] 58.93MB/109.1MB 6ac0e4adf315 Downloading [================================> ] 40.55MB/62.07MB eabd8714fec9 Extracting [======================> ] 166MB/375MB f3b09c502777 Downloading [===================> ] 22.17MB/56.52MB 55f2b468da67 Extracting [=========> ] 51.25MB/257.9MB 4ba79830ebce Extracting [==================================================>] 166.8MB/166.8MB 2d429b9e73a6 Extracting [===================> ] 11.21MB/29.13MB 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB e73cb4a42719 Downloading [================================> ] 71.37MB/109.1MB c49e0ee60bfb Extracting [======> ] 14.48MB/107.3MB 6ac0e4adf315 Downloading [=========================================> ] 51.36MB/62.07MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB f3b09c502777 Downloading [==========================> ] 30.28MB/56.52MB 55f2b468da67 Extracting [===========> ] 57.93MB/257.9MB e73cb4a42719 Downloading [=====================================> ] 81.64MB/109.1MB 6ac0e4adf315 Verifying Checksum 6ac0e4adf315 Download complete 2d429b9e73a6 Extracting [======================> ] 13.27MB/29.13MB eabd8714fec9 Extracting [======================> ] 172.1MB/375MB 408012a7b118 Downloading [==================================================>] 637B/637B c49e0ee60bfb Extracting [=======> ] 16.71MB/107.3MB f3b09c502777 Downloading [=================================> ] 
37.31MB/56.52MB e73cb4a42719 Downloading [=========================================> ] 91.37MB/109.1MB 408012a7b118 Verifying Checksum 408012a7b118 Download complete 2d429b9e73a6 Extracting [=======================> ] 13.57MB/29.13MB eabd8714fec9 Extracting [========================> ] 180.5MB/375MB 55f2b468da67 Extracting [============> ] 63.5MB/257.9MB 44986281b8b9 Downloading [=====================================> ] 3.011kB/4.022kB 44986281b8b9 Downloading [==================================================>] 4.022kB/4.022kB 44986281b8b9 Verifying Checksum 44986281b8b9 Download complete bf70c5107ab5 Downloading [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Verifying Checksum bf70c5107ab5 Download complete f3b09c502777 Downloading [===============================================> ] 54.07MB/56.52MB 1ccde423731d Downloading [==> ] 3.01kB/61.44kB 1ccde423731d Downloading [==================================================>] 61.44kB/61.44kB 1ccde423731d Verifying Checksum 1ccde423731d Download complete f3b09c502777 Verifying Checksum f3b09c502777 Download complete 7221d93db8a9 Downloading [==================================================>] 100B/100B 7221d93db8a9 Verifying Checksum 7221d93db8a9 Download complete e73cb4a42719 Downloading [================================================> ] 106.5MB/109.1MB 7df673c7455d Downloading [==================================================>] 694B/694B 7df673c7455d Verifying Checksum 7df673c7455d Download complete e73cb4a42719 Verifying Checksum e73cb4a42719 Download complete eabd8714fec9 Extracting [=========================> ] 189.4MB/375MB 2d429b9e73a6 Extracting [=============================> ] 17.4MB/29.13MB c49e0ee60bfb Extracting [========> ] 17.83MB/107.3MB 55f2b468da67 Extracting [==============> ] 72.97MB/257.9MB eabd8714fec9 Extracting [==========================> ] 200MB/375MB c49e0ee60bfb Extracting [==========> ] 21.73MB/107.3MB 55f2b468da67 Extracting [===============> ] 
82.44MB/257.9MB 2d429b9e73a6 Extracting [====================================> ] 21.53MB/29.13MB 110a13bd01fb Downloading [> ] 539.6kB/71.86MB eabd8714fec9 Extracting [===========================> ] 209.5MB/375MB 55f2b468da67 Extracting [=================> ] 88.57MB/257.9MB c49e0ee60bfb Extracting [=============> ] 29.52MB/107.3MB d4108afce2f7 Downloading [==================================================>] 1.073kB/1.073kB d4108afce2f7 Verifying Checksum d4108afce2f7 Download complete 110a13bd01fb Downloading [=======> ] 10.81MB/71.86MB 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 55f2b468da67 Extracting [==================> ] 95.81MB/257.9MB c49e0ee60bfb Extracting [===============> ] 32.87MB/107.3MB 1617e25568b2 Pull complete 4ba79830ebce Pull complete eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 110a13bd01fb Downloading [==============> ] 21.09MB/71.86MB 07255172bfd8 Downloading [============================> ] 3.003kB/5.24kB 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB 07255172bfd8 Download complete 110a13bd01fb Downloading [================> ] 24.33MB/71.86MB 12cf1ed9c784 Downloading [> ] 146.4kB/14.64MB 2d429b9e73a6 Extracting [===============================================> ] 27.43MB/29.13MB eabd8714fec9 Extracting [=============================> ] 218.9MB/375MB c49e0ee60bfb Extracting [================> ] 35.09MB/107.3MB 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 55f2b468da67 Extracting [=====================> ] 109.2MB/257.9MB 12cf1ed9c784 Downloading [=============> ] 3.98MB/14.64MB 110a13bd01fb Downloading [=========================> ] 37.31MB/71.86MB eabd8714fec9 Extracting [=============================> ] 221.2MB/375MB c49e0ee60bfb Extracting [=================> ] 38.44MB/107.3MB 6ac0e4adf315 Extracting [===> ] 3.899MB/62.07MB 22c948928e79 Downloading [==================================================>] 1.031kB/1.031kB 22c948928e79 Verifying Checksum 
22c948928e79 Download complete 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 12cf1ed9c784 Downloading [=======================================> ] 11.65MB/14.64MB 110a13bd01fb Downloading [=================================> ] 47.58MB/71.86MB eabd8714fec9 Extracting [==============================> ] 225.1MB/375MB 12cf1ed9c784 Verifying Checksum 12cf1ed9c784 Download complete c49e0ee60bfb Extracting [==================> ] 40.67MB/107.3MB 6ac0e4adf315 Extracting [=====> ] 6.685MB/62.07MB 55f2b468da67 Extracting [======================> ] 114.8MB/257.9MB 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 110a13bd01fb Downloading [=========================================> ] 58.93MB/71.86MB eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB c49e0ee60bfb Extracting [====================> ] 44.01MB/107.3MB e92d65bf8445 Download complete 6ac0e4adf315 Extracting [=======> ] 9.47MB/62.07MB 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB 55f2b468da67 Extracting [======================> ] 118.1MB/257.9MB 7910fddefabc Downloading [=======> ] 3.002kB/19.51kB 7910fddefabc Downloading [==================================================>] 19.51kB/19.51kB 7910fddefabc Verifying Checksum 7910fddefabc Download complete 110a13bd01fb Verifying Checksum 110a13bd01fb Download complete eabd8714fec9 Extracting [==============================> ] 231.2MB/375MB c49e0ee60bfb Extracting [======================> ] 48.46MB/107.3MB 6ac0e4adf315 Extracting [==========> ] 12.81MB/62.07MB 55f2b468da67 Extracting [=======================> ] 121.4MB/257.9MB d223479d7367 Extracting [> ] 98.3kB/6.742MB eabd8714fec9 Extracting [===============================> ] 234MB/375MB c49e0ee60bfb Extracting [========================> ] 51.81MB/107.3MB 110a13bd01fb Extracting [> ] 557.1kB/71.86MB 55f2b468da67 Extracting [========================> ] 124.8MB/257.9MB 6ac0e4adf315 Extracting 
[=============> ] 16.15MB/62.07MB c49e0ee60bfb Extracting [========================> ] 52.92MB/107.3MB d223479d7367 Extracting [==> ] 294.9kB/6.742MB 110a13bd01fb Extracting [=> ] 1.671MB/71.86MB eabd8714fec9 Extracting [===============================> ] 237.3MB/375MB 55f2b468da67 Extracting [========================> ] 125.9MB/257.9MB 6ac0e4adf315 Extracting [=============> ] 16.71MB/62.07MB eabd8714fec9 Extracting [===============================> ] 239MB/375MB 55f2b468da67 Extracting [========================> ] 127.6MB/257.9MB c49e0ee60bfb Extracting [=========================> ] 55.15MB/107.3MB d223479d7367 Extracting [========> ] 1.081MB/6.742MB 6ac0e4adf315 Extracting [==============> ] 18.38MB/62.07MB 110a13bd01fb Extracting [==> ] 3.899MB/71.86MB c49e0ee60bfb Extracting [===========================> ] 59.05MB/107.3MB eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 55f2b468da67 Extracting [=========================> ] 130.4MB/257.9MB 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 55f2b468da67 Extracting [==========================> ] 135.9MB/257.9MB 6ac0e4adf315 Extracting [=======================> ] 29.52MB/62.07MB eabd8714fec9 Extracting [================================> ] 245.1MB/375MB d223479d7367 Extracting [=================> ] 2.359MB/6.742MB 110a13bd01fb Extracting [===> ] 4.456MB/71.86MB c49e0ee60bfb Extracting [=============================> ] 62.95MB/107.3MB 55f2b468da67 Extracting [===========================> ] 140.4MB/257.9MB 6ac0e4adf315 Extracting [==========================> ] 32.87MB/62.07MB d223479d7367 Extracting [============================> ] 3.834MB/6.742MB 110a13bd01fb Extracting [====> ] 6.685MB/71.86MB eabd8714fec9 Extracting [=================================> ] 247.9MB/375MB c49e0ee60bfb Extracting [==============================> ] 66.29MB/107.3MB 6ac0e4adf315 Extracting [===================================> ] 44.01MB/62.07MB 55f2b468da67 Extracting [===========================> ] 
143.2MB/257.9MB 2d429b9e73a6 Pull complete d223479d7367 Extracting [========================================> ] 5.505MB/6.742MB 110a13bd01fb Extracting [======> ] 8.913MB/71.86MB eabd8714fec9 Extracting [=================================> ] 249.6MB/375MB c49e0ee60bfb Extracting [===============================> ] 68.52MB/107.3MB 6ac0e4adf315 Extracting [==========================================> ] 52.92MB/62.07MB 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB d223479d7367 Extracting [=================================================> ] 6.685MB/6.742MB d223479d7367 Extracting [==================================================>] 6.742MB/6.742MB 110a13bd01fb Extracting [========> ] 12.26MB/71.86MB 6ac0e4adf315 Extracting [===============================================> ] 59.05MB/62.07MB eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 110a13bd01fb Extracting [========> ] 12.81MB/71.86MB c49e0ee60bfb Extracting [=================================> ] 71.3MB/107.3MB 55f2b468da67 Extracting [============================> ] 148.7MB/257.9MB 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB c49e0ee60bfb Extracting [==================================> ] 73.53MB/107.3MB 55f2b468da67 Extracting [=============================> ] 153.2MB/257.9MB 110a13bd01fb Extracting [==========> ] 15.6MB/71.86MB 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB eabd8714fec9 Extracting [==================================> ] 255.1MB/375MB 110a13bd01fb Extracting [============> ] 18.38MB/71.86MB c49e0ee60bfb Extracting [====================================> ] 77.43MB/107.3MB 55f2b468da67 Extracting [==============================> ] 157.1MB/257.9MB 55f2b468da67 Extracting [==============================> ] 158.2MB/257.9MB 110a13bd01fb Extracting [===============> ] 22.84MB/71.86MB c49e0ee60bfb Extracting [=====================================> ] 
80.22MB/107.3MB eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB 55f2b468da67 Extracting [===============================> ] 161.5MB/257.9MB eabd8714fec9 Extracting [==================================> ] 261.3MB/375MB c49e0ee60bfb Extracting [======================================> ] 82.44MB/107.3MB 110a13bd01fb Extracting [==================> ] 26.18MB/71.86MB 55f2b468da67 Extracting [================================> ] 166MB/257.9MB c49e0ee60bfb Extracting [=======================================> ] 85.23MB/107.3MB eabd8714fec9 Extracting [===================================> ] 266.3MB/375MB 110a13bd01fb Extracting [====================> ] 29.52MB/71.86MB c49e0ee60bfb Extracting [=========================================> ] 88.01MB/107.3MB 110a13bd01fb Extracting [=====================> ] 30.64MB/71.86MB eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 110a13bd01fb Extracting [======================> ] 31.75MB/71.86MB c49e0ee60bfb Extracting [==========================================> ] 91.36MB/107.3MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB c49e0ee60bfb Extracting [============================================> ] 95.81MB/107.3MB 110a13bd01fb Extracting [========================> ] 35.09MB/71.86MB 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB c49e0ee60bfb Extracting [==============================================> ] 100.3MB/107.3MB 110a13bd01fb Extracting [==========================> ] 37.88MB/71.86MB 110a13bd01fb Extracting 
[docker image pull progress truncated: interleaved layer download/extraction status bars]
prometheus Pulled
policy-db-migrator Pulled
grafana Pulled
drools-pdp Pulled
kafka Pulled
postgres Pulled
zookeeper Pulled
Network compose_default  Creating
Network compose_default  Created
Container prometheus  Creating
Container zookeeper  Creating
Container postgres  Creating
Container prometheus  Created
Container grafana  Creating
Container postgres  Created
Container policy-db-migrator  Creating
Container zookeeper  Created
Container kafka  Creating
Container grafana  Created
Container policy-db-migrator  Created
Container policy-api  Creating
Container kafka  Created
Container policy-api  Created
Container policy-pap  Creating
Container policy-pap  Created
Container policy-drools-pdp  Creating
Container policy-drools-pdp  Created
Container prometheus  Starting
Container zookeeper  Starting
Container postgres  Starting
Container zookeeper  Started
Container kafka  Starting
Container kafka  Started
Container prometheus  Started
Container grafana  Starting
Container grafana  Started
Container postgres  Started
Container policy-db-migrator  Starting
Container policy-db-migrator  Started
Container policy-api  Starting
Container policy-api  Started
Container policy-pap  Starting
Container policy-pap  Started
Container policy-drools-pdp  Starting
Container policy-drools-pdp  Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for drools-pdp to start...
Checking if REST port 30216 is open on localhost ...
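The "Checking if REST port 30216 is open" step can be sketched as a small bash polling loop. This is a hedged illustration, not the actual CSIT script: `wait_for_port` and its parameters are invented names, and it relies on bash's `/dev/tcp` pseudo-device rather than whatever tool the job really uses.

```shell
#!/usr/bin/env bash
# Illustrative sketch only: poll until a TCP port accepts connections,
# the way the log's "Checking if REST port ... is open" step does.
# wait_for_port and its arguments are hypothetical, not from the log.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30} delay=${4:-2}
  local i
  for ((i = 0; i < retries; i++)); do
    # bash's /dev/tcp pseudo-device attempts a TCP connection on fd 3
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "port ${port} open on ${host}"
      return 0
    fi
    sleep "${delay}"
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}
```

With 30 retries at a 2-second delay, `wait_for_port localhost 30216` would roughly match the one-minute wait the log describes before the port check.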
IMAGE                                                    NAMES              STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT  policy-drools-pdp  Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT     policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT     policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9        kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest             grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest   zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest             prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4              postgres           Up About a minute
Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/resources/tests/models'...
Building robot framework docker image
sha256:59dd1288b2f926cd28d46d1527e43fc9907824f906b340dd67ad29ea5e80a303
top - 07:47:32 up 4 min, 0 users, load average: 2.25, 1.91, 0.85
Tasks: 230 total, 1 running, 152 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.8 us, 3.8 sy, 0.0 ni, 76.5 id, 5.8 wa, 0.0 hi, 0.1 si, 0.1 st
       total  used  free  shared  buff/cache  available
Mem:     31G  2.6G   21G     27M        7.7G        28G
Swap:   1.0G    0B  1.0G
CONTAINER ID  NAME               CPU %  MEM USAGE / LIMIT    MEM %  NET I/O          BLOCK I/O      PIDS
915ef216de02  policy-drools-pdp  0.56%  276.4MiB / 31.41GiB  0.86%  32.4kB / 41.2kB  0B / 8.19kB    54
12b717d76a98  policy-pap         2.57%  471.7MiB / 31.41GiB  1.47%  83.3kB / 126kB   0B / 139MB     67
6f869e23207e  policy-api         0.44%  447.2MiB / 31.41GiB  1.39%  1.15MB / 985kB   0B / 0B        57
984a3f1176c4  kafka              6.26%  397.2MiB / 31.41GiB  1.23%  154kB / 138kB    0B / 582kB     83
bb9e5e4c595f  grafana            0.17%  102.4MiB / 31.41GiB  0.32%  19.2MB / 208kB   0B / 30.4MB    20
471dbbebe660  zookeeper          0.09%  83.07MiB / 31.41GiB  0.26%  52.3kB / 44.1kB  4.1kB / 397kB  61
fe59f1ba32d2  prometheus         0.00%  20.68MiB / 31.41GiB  0.06%  56.5kB / 2.53kB  98.3kB / 0B    13
e8e00bf417cb  postgres           0.00%  84.57MiB / 31.41GiB  0.26%  1.64MB / 1.71MB  0B / 158MB     26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
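The ROBOT_VARIABLES listing above is a series of `-v NAME:value` pairs handed to Robot Framework. As a hypothetical sketch of how such a string could be assembled (the helper `build_robot_variables` and the subset of pairs shown are illustrative, not taken from the real csit scripts):

```shell
#!/usr/bin/env bash
# Illustrative helper: concatenate "-v NAME:value" pairs into one string,
# mirroring a few of the entries printed by the policy-csit container above.
build_robot_variables() {
  local pair
  for pair in \
    "POLICY_API_IP:policy-api:6969" \
    "POLICY_PAP_IP:policy-pap:6969" \
    "POLICY_DROOLS_IP:policy-drools-pdp:9696" \
    "TEST_ENV:docker"; do
    printf -- '-v %s ' "$pair"
  done
}

ROBOT_VARIABLES=$(build_robot_variables)
echo "ROBOT_VARIABLES=${ROBOT_VARIABLES}"
# The container would then run something like:
#   robot --outputdir /tmp/results ${ROBOT_VARIABLES} drools-pdp-test.robot
```

Because each pair is a single `-v` argument, Robot sees one variable per pair; the trailing space from `printf` simply separates arguments when the string is expanded unquoted.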
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
Shut down started!
Collecting logs from docker compose containers...
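A CI wrapper typically gates the build on the summary line and exit code shown above. The following is a hypothetical sketch of such a gate; the `summary` string is copied from the log, while the parsing itself is illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: extract the failed-test count from the robot summary
# line and reproduce the "RESULT: 0" marker seen in the log above.
summary="2 tests, 2 passed, 0 failed"

# Pull out the number immediately before the word "failed".
failed=$(printf '%s\n' "$summary" | sed -E 's/.* ([0-9]+) failed.*/\1/')

if [ "$failed" -eq 0 ]; then
  echo "RESULT: 0"
else
  echo "RESULT: $failed"
fi
```

Robot Framework's own exit code already encodes the number of failed tests (capped), so in practice the wrapper can also just propagate `$?` from the `robot` invocation.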
grafana | logger=settings t=2025-06-17T07:45:42.302457151Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-17T07:45:42Z
grafana | logger=settings t=2025-06-17T07:45:42.302846584Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-17T07:45:42.302881695Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-17T07:45:42.302906135Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-17T07:45:42.302944415Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-17T07:45:42.302989316Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-17T07:45:42.303042736Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-17T07:45:42.303081447Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-17T07:45:42.303107257Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-17T07:45:42.303145677Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-17T07:45:42.303174718Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-17T07:45:42.303228248Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-17T07:45:42.303266389Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-17T07:45:42.303302659Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-17T07:45:42.303345099Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-17T07:45:42.30340112Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-17T07:45:42.30343954Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-17T07:45:42.30346397Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-17T07:45:42.303506511Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-17T07:45:42.303951925Z level=info msg=FeatureToggles dashboardSceneForViewers=true publicDashboardsScene=true dashgpt=true alertRuleRestore=true ssoSettingsSAML=true dataplaneFrontendFallback=true alertingSimplifiedRouting=true ssoSettingsApi=true logRowsPopoverMenu=true cloudWatchNewLabelParsing=true logsPanelControls=true onPremToCloudMigrations=true influxdbBackendMigration=true azureMonitorEnableUserAuth=true alertingUIOptimizeReducer=true recoveryThreshold=true awsAsyncQueryCaching=true newPDFRendering=true cloudWatchRoundUpEndTime=true failWrongDSUID=true preinstallAutoUpdate=true lokiLabelNamesQueryApi=true annotationPermissionUpdate=true alertingInsights=true promQLScope=true kubernetesClientDashboardsFolders=true prometheusAzureOverrideAudience=true correlations=true unifiedStorageSearchPermissionFiltering=true externalCorePlugins=true unifiedRequestLog=true reportingUseRawTimeRange=true cloudWatchCrossAccountQuerying=true nestedFolders=true alertingRulePermanentlyDelete=true prometheusUsesCombobox=true lokiQuerySplitting=true angularDeprecationUI=true tlsMemcached=true useSessionStorageForRedirection=true alertingApiServer=true logsExploreTableVisualisation=true addFieldFromCalculationStatFunctions=true newFiltersUI=true formatString=true groupToNestedTableTransformation=true recordedQueriesMulti=true grafanaconThemes=true newDashboardSharingComponent=true kubernetesPlaylists=true alertingRuleVersionHistoryRestore=true alertingRuleRecoverDeleted=true dashboardSceneSolo=true lokiStructuredMetadata=true logsContextDatasourceUi=true transformationsRedesign=true lokiQueryHints=true alertingQueryAndExpressionsStepMode=true alertingNotificationsStepMode=true dashboardScene=true logsInfiniteScrolling=true pluginsDetailsRightPanel=true azureMonitorPrometheusExemplars=true pinNavItems=true panelMonitoring=true
grafana | logger=sqlstore t=2025-06-17T07:45:42.304042436Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-17T07:45:42.304103277Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-17T07:45:42.305797044Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-17T07:45:42.305833665Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-17T07:45:42.306529272Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-17T07:45:42.3073772Z level=info msg="Migration successfully executed" id="create migration_log table" duration=848.879µs
grafana | logger=migrator t=2025-06-17T07:45:42.313878047Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-17T07:45:42.314691735Z level=info msg="Migration successfully executed" id="create user table" duration=813.208µs
grafana | logger=migrator t=2025-06-17T07:45:42.319303282Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-17T07:45:42.320637575Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.333933ms
grafana | logger=migrator t=2025-06-17T07:45:42.326317203Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-17T07:45:42.327114601Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=797.128µs
grafana | logger=migrator t=2025-06-17T07:45:42.330120342Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.33083693Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=715.958µs
grafana | logger=migrator t=2025-06-17T07:45:42.33379871Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.334500317Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=701.237µs
grafana | logger=migrator t=2025-06-17T07:45:42.340481898Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.343079884Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.594846ms
grafana | logger=migrator t=2025-06-17T07:45:42.345995494Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-17T07:45:42.346948143Z level=info msg="Migration successfully executed" id="create user table v2" duration=952.459µs
grafana | logger=migrator t=2025-06-17T07:45:42.350017635Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.350807073Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=789.038µs
grafana | logger=migrator t=2025-06-17T07:45:42.356213298Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.357008626Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=794.678µs
grafana | logger=migrator t=2025-06-17T07:45:42.359947866Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-17T07:45:42.360389221Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=440.285µs
grafana | logger=migrator t=2025-06-17T07:45:42.363375351Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-17T07:45:42.363988308Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=618.467µs
grafana | logger=migrator t=2025-06-17T07:45:42.366341421Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-17T07:45:42.367545474Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.202203ms
grafana | logger=migrator t=2025-06-17T07:45:42.372750007Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.372850398Z level=info msg="Migration successfully executed" id="Update user table charset" duration=99.541µs
grafana | logger=migrator t=2025-06-17T07:45:42.375852599Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-17T07:45:42.37700508Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.152301ms
grafana | logger=migrator t=2025-06-17T07:45:42.379999891Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-17T07:45:42.380303584Z level=info msg="Migration successfully executed" id="Add missing user data" duration=303.263µs
grafana | logger=migrator t=2025-06-17T07:45:42.383272394Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-17T07:45:42.384521077Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.247643ms
grafana | logger=migrator t=2025-06-17T07:45:42.390831691Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-17T07:45:42.3916865Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=850.408µs
grafana | logger=migrator t=2025-06-17T07:45:42.394522938Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-17T07:45:42.395800592Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.276524ms
grafana | logger=migrator t=2025-06-17T07:45:42.398684011Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-17T07:45:42.407184358Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.499507ms
grafana | logger=migrator t=2025-06-17T07:45:42.41035075Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-17T07:45:42.411672344Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.321044ms
grafana | logger=migrator t=2025-06-17T07:45:42.419786766Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-17T07:45:42.420209741Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=421.875µs
grafana | logger=migrator t=2025-06-17T07:45:42.423270012Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-17T07:45:42.424398344Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.128442ms
grafana | logger=migrator t=2025-06-17T07:45:42.427267093Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-17T07:45:42.428580867Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.313174ms
grafana | logger=migrator t=2025-06-17T07:45:42.433267144Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-17T07:45:42.433674548Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=407.784µs
grafana | logger=migrator t=2025-06-17T07:45:42.436823461Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-17T07:45:42.437476677Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=652.606µs
grafana | logger=migrator t=2025-06-17T07:45:42.440463438Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-17T07:45:42.440993973Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=529.745µs
grafana | logger=migrator t=2025-06-17T07:45:42.443968543Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-17T07:45:42.444407607Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=438.134µs
grafana | logger=migrator t=2025-06-17T07:45:42.449459709Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-17T07:45:42.450391088Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=930.859µs
grafana | logger=migrator t=2025-06-17T07:45:42.45353147Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-17T07:45:42.454339659Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=808.049µs
grafana | logger=migrator t=2025-06-17T07:45:42.45742579Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-17T07:45:42.458227548Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=801.478µs
grafana | logger=migrator t=2025-06-17T07:45:42.462999037Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-17T07:45:42.463861576Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=862.149µs
grafana | logger=migrator t=2025-06-17T07:45:42.466570933Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-17T07:45:42.467370861Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=799.568µs
grafana | logger=migrator t=2025-06-17T07:45:42.470303341Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.470373212Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=73.251µs
grafana | logger=migrator t=2025-06-17T07:45:42.473211052Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.473992039Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=776.568µs
grafana | logger=migrator t=2025-06-17T07:45:42.478432144Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.479172242Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=739.828µs
grafana | logger=migrator t=2025-06-17T07:45:42.48189726Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.482639287Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=741.337µs
grafana | logger=migrator t=2025-06-17T07:45:42.48779444Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.489984482Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=2.192112ms
grafana | logger=migrator t=2025-06-17T07:45:42.493755341Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.497559319Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.804698ms
grafana | logger=migrator t=2025-06-17T07:45:42.500469049Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-17T07:45:42.501280448Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=810.569µs
grafana | logger=migrator t=2025-06-17T07:45:42.506332868Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.507296579Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=963.431µs
grafana | logger=migrator t=2025-06-17T07:45:42.510467741Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.51137405Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=905.879µs
grafana | logger=migrator t=2025-06-17T07:45:42.514687344Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.515869297Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.181423ms
grafana | logger=migrator t=2025-06-17T07:45:42.520963879Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.521902388Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=938.519µs
grafana | logger=migrator t=2025-06-17T07:45:42.524949139Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-17T07:45:42.525500544Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=551.405µs
grafana | logger=migrator t=2025-06-17T07:45:42.528787878Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-17T07:45:42.529517486Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=729.438µs
grafana | logger=migrator t=2025-06-17T07:45:42.53286687Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-17T07:45:42.533399455Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=532.515µs
grafana | logger=migrator t=2025-06-17T07:45:42.538569288Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-17T07:45:42.539631388Z level=info msg="Migration successfully executed" id="create star table" duration=1.0613ms
grafana | logger=migrator t=2025-06-17T07:45:42.543270986Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-17T07:45:42.544291686Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.02008ms
grafana | logger=migrator t=2025-06-17T07:45:42.547522489Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-17T07:45:42.549129386Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.606287ms
grafana | logger=migrator t=2025-06-17T07:45:42.552607691Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-17T07:45:42.555120137Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.515046ms
grafana | logger=migrator t=2025-06-17T07:45:42.559925795Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-17T07:45:42.561752644Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.826499ms
grafana | logger=migrator t=2025-06-17T07:45:42.565425391Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-17T07:45:42.566628844Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.202143ms
grafana | logger=migrator t=2025-06-17T07:45:42.570359572Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-17T07:45:42.571468594Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.108552ms
grafana | logger=migrator t=2025-06-17T07:45:42.575453644Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.576438733Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=985.379µs
grafana | logger=migrator t=2025-06-17T07:45:42.58094759Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-17T07:45:42.58199181Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.03965ms
grafana | logger=migrator t=2025-06-17T07:45:42.586094872Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.587036602Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=941.63µs
grafana | logger=migrator t=2025-06-17T07:45:42.590746049Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.59177551Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.027931ms
grafana | logger=migrator t=2025-06-17T07:45:42.595382737Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.596370997Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=988.07µs
grafana | logger=migrator t=2025-06-17T07:45:42.600731441Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.600760962Z level=info msg="Migration successfully executed" id="Update org table charset" duration=29.351µs
grafana | logger=migrator t=2025-06-17T07:45:42.604258097Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.604285057Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.95µs
grafana | logger=migrator t=2025-06-17T07:45:42.608191978Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-17T07:45:42.608882145Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=724.257µs
grafana | logger=migrator t=2025-06-17T07:45:42.612740574Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-17T07:45:42.613760445Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.018941ms
grafana | logger=migrator t=2025-06-17T07:45:42.618137289Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-17T07:45:42.619417762Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.280263ms
grafana | logger=migrator t=2025-06-17T07:45:42.622829177Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-17T07:45:42.623873667Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.04369ms
grafana | logger=migrator t=2025-06-17T07:45:42.627564636Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-17T07:45:42.628432664Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=867.728µs
grafana | logger=migrator t=2025-06-17T07:45:42.632836558Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-17T07:45:42.634008481Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.170993ms
grafana | logger=migrator t=2025-06-17T07:45:42.638369235Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.639236354Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=866.789µs
grafana | logger=migrator t=2025-06-17T07:45:42.642584828Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-17T07:45:42.648254716Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.668968ms
grafana | logger=migrator t=2025-06-17T07:45:42.679366573Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-17T07:45:42.681548766Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=2.184193ms
grafana | logger=migrator t=2025-06-17T07:45:42.68595016Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.68688195Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=931.78µs
grafana | logger=migrator t=2025-06-17T07:45:42.693286445Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.694297495Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.00923ms
grafana | logger=migrator t=2025-06-17T07:45:42.699463688Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-17T07:45:42.700183266Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=718.708µs
grafana | logger=migrator t=2025-06-17T07:45:42.703996975Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-17T07:45:42.705060235Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.06219ms
grafana | logger=migrator t=2025-06-17T07:45:42.708623572Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-17T07:45:42.708653332Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=31.67µs
grafana | logger=migrator t=2025-06-17T07:45:42.713202098Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.716702274Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.498976ms
grafana | logger=migrator t=2025-06-17T07:45:42.720349312Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-17T07:45:42.721859516Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.509514ms
grafana | logger=migrator t=2025-06-17T07:45:42.725254581Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.726602125Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.346734ms
grafana | logger=migrator t=2025-06-17T07:45:42.730786687Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.731595476Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=808.209µs
grafana | logger=migrator t=2025-06-17T07:45:42.735015741Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.738983741Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.96558ms
grafana | logger=migrator t=2025-06-17T07:45:42.74280816Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.744146884Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.337834ms
grafana | logger=migrator t=2025-06-17T07:45:42.748546679Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-17T07:45:42.749399978Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=852.859µs
grafana | logger=migrator t=2025-06-17T07:45:42.752857322Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.752885733Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=28.641µs
grafana | logger=migrator t=2025-06-17T07:45:42.756391339Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-17T07:45:42.75642939Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=38.901µs
grafana | logger=migrator t=2025-06-17T07:45:42.760847914Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.764907326Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=4.059012ms
grafana | logger=migrator t=2025-06-17T07:45:42.768446582Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.770638144Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.189812ms
grafana | logger=migrator t=2025-06-17T07:45:42.773986178Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.776183971Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.199663ms
grafana | logger=migrator t=2025-06-17T07:45:42.779466014Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.781629277Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.162173ms
grafana | logger=migrator t=2025-06-17T07:45:42.785755798Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-17T07:45:42.786167122Z level=info msg="Migration successfully
executed" id="Update uid column values in dashboard" duration=407.134µs grafana | logger=migrator t=2025-06-17T07:45:42.789418105Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-17T07:45:42.790514407Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.096192ms grafana | logger=migrator t=2025-06-17T07:45:42.794224425Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-17T07:45:42.795094103Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=869.058µs grafana | logger=migrator t=2025-06-17T07:45:42.799499568Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-17T07:45:42.799558659Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=59.051µs grafana | logger=migrator t=2025-06-17T07:45:42.802117945Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-17T07:45:42.803090885Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=972.35µs grafana | logger=migrator t=2025-06-17T07:45:42.806466509Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-17T07:45:42.807275018Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=804.109µs grafana | logger=migrator t=2025-06-17T07:45:42.811625242Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-17T07:45:42.817995937Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.369865ms 
grafana | logger=migrator t=2025-06-17T07:45:42.821934867Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-17T07:45:42.822907687Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=971.83µs grafana | logger=migrator t=2025-06-17T07:45:42.826256272Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-17T07:45:42.827199281Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=942.729µs grafana | logger=migrator t=2025-06-17T07:45:42.831345103Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-17T07:45:42.832349144Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.00374ms grafana | logger=migrator t=2025-06-17T07:45:42.83593356Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-17T07:45:42.836483965Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=548.315µs grafana | logger=migrator t=2025-06-17T07:45:42.840243724Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-17T07:45:42.841176143Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=931.589µs grafana | logger=migrator t=2025-06-17T07:45:42.846103974Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-17T07:45:42.849407907Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.288573ms grafana | logger=migrator t=2025-06-17T07:45:42.853089895Z level=info msg="Executing migration" 
id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-17T07:45:42.853999184Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=909.309µs grafana | logger=migrator t=2025-06-17T07:45:42.857601411Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-17T07:45:42.857876044Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=277.313µs grafana | logger=migrator t=2025-06-17T07:45:42.861093686Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-17T07:45:42.861364829Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=270.773µs grafana | logger=migrator t=2025-06-17T07:45:42.865807155Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-17T07:45:42.866778454Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=969.859µs grafana | logger=migrator t=2025-06-17T07:45:42.870026487Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-17T07:45:42.872723235Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.696108ms grafana | logger=migrator t=2025-06-17T07:45:42.876120099Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-17T07:45:42.880813887Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=4.679248ms grafana | logger=migrator t=2025-06-17T07:45:42.886010851Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-17T07:45:42.886677537Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=666.746µs grafana | logger=migrator 
t=2025-06-17T07:45:42.890090892Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-17T07:45:42.892345385Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.252183ms grafana | logger=migrator t=2025-06-17T07:45:42.896272606Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-17T07:45:42.900172405Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=3.899449ms grafana | logger=migrator t=2025-06-17T07:45:42.909040975Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-17T07:45:42.909908794Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=866.869µs grafana | logger=migrator t=2025-06-17T07:45:42.936622187Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-17T07:45:42.942436126Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=5.815319ms grafana | logger=migrator t=2025-06-17T07:45:42.946252304Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-17T07:45:42.946929972Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=677.318µs grafana | logger=migrator t=2025-06-17T07:45:42.950273816Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-17T07:45:42.95070269Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=428.534µs grafana | logger=migrator t=2025-06-17T07:45:42.955194886Z level=info msg="Executing migration" id="create 
data_source table" grafana | logger=migrator t=2025-06-17T07:45:42.956715091Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.518855ms grafana | logger=migrator t=2025-06-17T07:45:42.960317598Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-17T07:45:42.961958865Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.615916ms grafana | logger=migrator t=2025-06-17T07:45:42.966248599Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-17T07:45:42.967217019Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=967.74µs grafana | logger=migrator t=2025-06-17T07:45:42.971495072Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-17T07:45:42.972358841Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=863.409µs grafana | logger=migrator t=2025-06-17T07:45:42.976102429Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-17T07:45:42.976922417Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=819.808µs grafana | logger=migrator t=2025-06-17T07:45:42.981180171Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-17T07:45:42.988056451Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.87547ms grafana | logger=migrator t=2025-06-17T07:45:42.991624208Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-17T07:45:42.992397826Z level=info msg="Migration 
successfully executed" id="create data_source table v2" duration=773.237µs grafana | logger=migrator t=2025-06-17T07:45:42.995625538Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-17T07:45:42.996276465Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=650.817µs grafana | logger=migrator t=2025-06-17T07:45:43.000263186Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-17T07:45:43.001186485Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=922.979µs grafana | logger=migrator t=2025-06-17T07:45:43.021426631Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-17T07:45:43.022054427Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=627.226µs grafana | logger=migrator t=2025-06-17T07:45:43.02722289Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-17T07:45:43.02911815Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.894649ms grafana | logger=migrator t=2025-06-17T07:45:43.033851388Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-17T07:45:43.036367194Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.515136ms grafana | logger=migrator t=2025-06-17T07:45:43.03989674Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-17T07:45:43.03992293Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.89µs grafana | logger=migrator t=2025-06-17T07:45:43.043572467Z level=info msg="Executing migration" id="Update 
initial version to 1" grafana | logger=migrator t=2025-06-17T07:45:43.043764509Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=192.002µs grafana | logger=migrator t=2025-06-17T07:45:43.048598688Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-17T07:45:43.051984352Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.383044ms grafana | logger=migrator t=2025-06-17T07:45:43.05759243Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-17T07:45:43.057845792Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=252.802µs grafana | logger=migrator t=2025-06-17T07:45:43.063128456Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-17T07:45:43.06355506Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=426.984µs grafana | logger=migrator t=2025-06-17T07:45:43.072118128Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-17T07:45:43.074940416Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.824559ms grafana | logger=migrator t=2025-06-17T07:45:43.080694685Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-17T07:45:43.080890087Z level=info msg="Migration successfully executed" id="Update uid value" duration=195.632µs grafana | logger=migrator t=2025-06-17T07:45:43.08507428Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-17T07:45:43.085939988Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=865.488µs grafana | logger=migrator t=2025-06-17T07:45:43.088700797Z level=info msg="Executing migration" id="add unique 
index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-17T07:45:43.089462234Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=760.857µs grafana | logger=migrator t=2025-06-17T07:45:43.094814869Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-17T07:45:43.097329275Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.513435ms grafana | logger=migrator t=2025-06-17T07:45:43.100135173Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-17T07:45:43.101906001Z level=info msg="Migration successfully executed" id="Add api_version column" duration=1.770288ms grafana | logger=migrator t=2025-06-17T07:45:43.105415747Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-17T07:45:43.105429407Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=14.13µs grafana | logger=migrator t=2025-06-17T07:45:43.111317077Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-17T07:45:43.112265077Z level=info msg="Migration successfully executed" id="create api_key table" duration=948.32µs grafana | logger=migrator t=2025-06-17T07:45:43.115554631Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-17T07:45:43.116307538Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=752.767µs grafana | logger=migrator t=2025-06-17T07:45:43.122260509Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-17T07:45:43.123017786Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=756.627µs grafana | logger=migrator t=2025-06-17T07:45:43.12629016Z 
level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-17T07:45:43.127057578Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=767.208µs grafana | logger=migrator t=2025-06-17T07:45:43.13127417Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-17T07:45:43.132021558Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=742.888µs grafana | logger=migrator t=2025-06-17T07:45:43.135507224Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-17T07:45:43.136218381Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=711.097µs grafana | logger=migrator t=2025-06-17T07:45:43.144214332Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-17T07:45:43.144955759Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=740.867µs grafana | logger=migrator t=2025-06-17T07:45:43.148542867Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-17T07:45:43.15964445Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.096853ms grafana | logger=migrator t=2025-06-17T07:45:43.166288777Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-17T07:45:43.166888613Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=600.246µs grafana | logger=migrator t=2025-06-17T07:45:43.175847704Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-17T07:45:43.177080857Z level=info msg="Migration 
successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.232933ms grafana | logger=migrator t=2025-06-17T07:45:43.181515592Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-17T07:45:43.182300531Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=784.609µs grafana | logger=migrator t=2025-06-17T07:45:43.185240281Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-17T07:45:43.186043279Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=805.437µs grafana | logger=migrator t=2025-06-17T07:45:43.192197531Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-17T07:45:43.192734336Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=536.515µs grafana | logger=migrator t=2025-06-17T07:45:43.195929189Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-17T07:45:43.196433175Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=503.816µs grafana | logger=migrator t=2025-06-17T07:45:43.199520206Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-17T07:45:43.199545116Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.25µs grafana | logger=migrator t=2025-06-17T07:45:43.204695369Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-17T07:45:43.207697019Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.00072ms grafana | logger=migrator t=2025-06-17T07:45:43.210928292Z level=info msg="Executing migration" id="Add service account foreign key" grafana | 
logger=migrator t=2025-06-17T07:45:43.213912582Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.94524ms grafana | logger=migrator t=2025-06-17T07:45:43.21761877Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-17T07:45:43.217846652Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=228.132µs grafana | logger=migrator t=2025-06-17T07:45:43.222249897Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-17T07:45:43.225110157Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.85914ms grafana | logger=migrator t=2025-06-17T07:45:43.230613843Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-17T07:45:43.232962647Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.353434ms grafana | logger=migrator t=2025-06-17T07:45:43.236563613Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-17T07:45:43.237093638Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=530.105µs grafana | logger=migrator t=2025-06-17T07:45:43.240822417Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-17T07:45:43.241390313Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=567.796µs grafana | logger=migrator t=2025-06-17T07:45:43.246788188Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-17T07:45:43.247599206Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" 
duration=810.818µs grafana | logger=migrator t=2025-06-17T07:45:43.251387154Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-17T07:45:43.252189153Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=801.959µs grafana | logger=migrator t=2025-06-17T07:45:43.2559207Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-17T07:45:43.256706898Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=786.058µs grafana | logger=migrator t=2025-06-17T07:45:43.263039222Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-17T07:45:43.263906032Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=863.07µs grafana | logger=migrator t=2025-06-17T07:45:43.268951094Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-17T07:45:43.268968834Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.6µs grafana | logger=migrator t=2025-06-17T07:45:43.275319478Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-17T07:45:43.275343448Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.85µs grafana | logger=migrator t=2025-06-17T07:45:43.27947008Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-17T07:45:43.282293209Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.822479ms grafana | 
logger=migrator t=2025-06-17T07:45:43.291350752Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-17T07:45:43.29410914Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.758748ms grafana | logger=migrator t=2025-06-17T07:45:43.297951349Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-17T07:45:43.297966919Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=15.57µs grafana | logger=migrator t=2025-06-17T07:45:43.301788568Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-17T07:45:43.302291823Z level=info msg="Migration successfully executed" id="create quota table v1" duration=502.745µs grafana | logger=migrator t=2025-06-17T07:45:43.308509927Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-17T07:45:43.309798839Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.288042ms grafana | logger=migrator t=2025-06-17T07:45:43.314192554Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-17T07:45:43.314237495Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=44.621µs grafana | logger=migrator t=2025-06-17T07:45:43.318542938Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-17T07:45:43.319793641Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.250393ms grafana | logger=migrator t=2025-06-17T07:45:43.326082375Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator 
t=2025-06-17T07:45:43.326891473Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=808.058µs
grafana | logger=migrator t=2025-06-17T07:45:43.330684343Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-17T07:45:43.33539765Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.712558ms
grafana | logger=migrator t=2025-06-17T07:45:43.340011767Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.340036767Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.64µs
grafana | logger=migrator t=2025-06-17T07:45:43.343684624Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-17T07:45:43.344035508Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=308.533µs
grafana | logger=migrator t=2025-06-17T07:45:43.347683636Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
grafana | logger=migrator t=2025-06-17T07:45:43.358452255Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.768519ms
grafana | logger=migrator t=2025-06-17T07:45:43.36483426Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-17T07:45:43.365685049Z level=info msg="Migration successfully executed" id="create session table" duration=849.889µs
grafana | logger=migrator t=2025-06-17T07:45:43.369850772Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-17T07:45:43.369954993Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=105.161µs
grafana | logger=migrator t=2025-06-17T07:45:43.37559226Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-17T07:45:43.375678731Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=86.161µs
grafana | logger=migrator t=2025-06-17T07:45:43.379772122Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-17T07:45:43.380299287Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=531.095µs
grafana | logger=migrator t=2025-06-17T07:45:43.384187607Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-17T07:45:43.384700312Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=511.775µs
grafana | logger=migrator t=2025-06-17T07:45:43.392229459Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.39225495Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.301µs
grafana | logger=migrator t=2025-06-17T07:45:43.396017008Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.396038128Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=21.36µs
grafana | logger=migrator t=2025-06-17T07:45:43.399728335Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-17T07:45:43.402920368Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.191503ms
grafana | logger=migrator t=2025-06-17T07:45:43.439713733Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-17T07:45:43.444680393Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.96608ms
grafana | logger=migrator t=2025-06-17T07:45:43.453128119Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-17T07:45:43.45321639Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=88.661µs
grafana | logger=migrator t=2025-06-17T07:45:43.457255552Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-17T07:45:43.457337123Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=81.381µs
grafana | logger=migrator t=2025-06-17T07:45:43.461287672Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-17T07:45:43.462140372Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=852.68µs
grafana | logger=migrator t=2025-06-17T07:45:43.467533166Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.467562946Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=29.9µs
grafana | logger=migrator t=2025-06-17T07:45:43.471338035Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-17T07:45:43.474536768Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.198713ms
grafana | logger=migrator t=2025-06-17T07:45:43.477584389Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-17T07:45:43.47775206Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=167.211µs
grafana | logger=migrator t=2025-06-17T07:45:43.480795521Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-17T07:45:43.483947823Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.151642ms
grafana | logger=migrator t=2025-06-17T07:45:43.490242878Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-17T07:45:43.494234228Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.02373ms
grafana | logger=migrator t=2025-06-17T07:45:43.49831369Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-17T07:45:43.4983321Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=18.64µs
grafana | logger=migrator t=2025-06-17T07:45:43.501680055Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-17T07:45:43.502486392Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=806.067µs
grafana | logger=migrator t=2025-06-17T07:45:43.509077849Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-17T07:45:43.510972559Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.89467ms
grafana | logger=migrator t=2025-06-17T07:45:43.51598988Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-17T07:45:43.518114342Z level=info msg="Migration successfully executed" id="create alert table v1" duration=2.124002ms
grafana | logger=migrator t=2025-06-17T07:45:43.523535016Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-17T07:45:43.524377826Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=842.54µs
grafana | logger=migrator t=2025-06-17T07:45:43.528902431Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-17T07:45:43.5297032Z level=info msg="Migration successfully executed" id="add index alert state" duration=795.729µs
grafana | logger=migrator t=2025-06-17T07:45:43.533893032Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-17T07:45:43.53469739Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=803.228µs
grafana | logger=migrator t=2025-06-17T07:45:43.539201456Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-17T07:45:43.540261437Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.053621ms
grafana | logger=migrator t=2025-06-17T07:45:43.545607801Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-17T07:45:43.547483381Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.87312ms
grafana | logger=migrator t=2025-06-17T07:45:43.552099478Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-17T07:45:43.553371611Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.270883ms
grafana | logger=migrator t=2025-06-17T07:45:43.557791436Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-17T07:45:43.570516805Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.723499ms
grafana | logger=migrator t=2025-06-17T07:45:43.576072672Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-17T07:45:43.57691648Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=843.178µs
grafana | logger=migrator t=2025-06-17T07:45:43.581270635Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-17T07:45:43.582170094Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=898.479µs
grafana | logger=migrator t=2025-06-17T07:45:43.588394528Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-17T07:45:43.588669551Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=274.863µs
grafana | logger=migrator t=2025-06-17T07:45:43.593064795Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-17T07:45:43.593940985Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=874.49µs
grafana | logger=migrator t=2025-06-17T07:45:43.59842956Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-17T07:45:43.599641483Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.210393ms
grafana | logger=migrator t=2025-06-17T07:45:43.603486571Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-17T07:45:43.608059248Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.571677ms
grafana | logger=migrator t=2025-06-17T07:45:43.612066259Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-17T07:45:43.615715516Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.648407ms
grafana | logger=migrator t=2025-06-17T07:45:43.619788838Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-17T07:45:43.623484805Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.695137ms
grafana | logger=migrator t=2025-06-17T07:45:43.626673147Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-17T07:45:43.63079533Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.121023ms
grafana | logger=migrator t=2025-06-17T07:45:43.638491357Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-17T07:45:43.639535939Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.046212ms
grafana | logger=migrator t=2025-06-17T07:45:43.642975823Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.643004374Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.801µs
grafana | logger=migrator t=2025-06-17T07:45:43.647970915Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-17T07:45:43.648008355Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=38.91µs
grafana | logger=migrator t=2025-06-17T07:45:43.654200478Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-17T07:45:43.656935426Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=2.734438ms
grafana | logger=migrator t=2025-06-17T07:45:43.662636434Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-17T07:45:43.663611683Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=974.869µs
grafana | logger=migrator t=2025-06-17T07:45:43.668832687Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-17T07:45:43.669647696Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=814.249µs
grafana | logger=migrator t=2025-06-17T07:45:43.702293878Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-17T07:45:43.703659292Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.368904ms
grafana | logger=migrator t=2025-06-17T07:45:43.710581982Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-17T07:45:43.712194299Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.612317ms
grafana | logger=migrator t=2025-06-17T07:45:43.716663844Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-17T07:45:43.720701246Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.036962ms
grafana | logger=migrator t=2025-06-17T07:45:43.725348052Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-17T07:45:43.729217373Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.86297ms
grafana | logger=migrator t=2025-06-17T07:45:43.735256834Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-17T07:45:43.735457396Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=199.882µs
grafana | logger=migrator t=2025-06-17T07:45:43.739663599Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-17T07:45:43.741122673Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.458804ms
grafana | logger=migrator t=2025-06-17T07:45:43.913970615Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-17T07:45:43.919110787Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=5.143142ms
grafana | logger=migrator t=2025-06-17T07:45:44.067149954Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-17T07:45:44.073963094Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.81494ms
grafana | logger=migrator t=2025-06-17T07:45:44.077367538Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-17T07:45:44.077386619Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=19.851µs
grafana | logger=migrator t=2025-06-17T07:45:44.081607752Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-17T07:45:44.083116747Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.509345ms
grafana | logger=migrator t=2025-06-17T07:45:44.086327109Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-17T07:45:44.087528722Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.202733ms
grafana | logger=migrator t=2025-06-17T07:45:44.090089178Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-17T07:45:44.090182869Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=95.991µs
grafana | logger=migrator t=2025-06-17T07:45:44.094893157Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-17T07:45:44.097575174Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=2.681717ms
grafana | logger=migrator t=2025-06-17T07:45:44.102943528Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-17T07:45:44.104312893Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.369675ms
grafana | logger=migrator t=2025-06-17T07:45:44.107695437Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-17T07:45:44.108718047Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.02269ms
grafana | logger=migrator t=2025-06-17T07:45:44.114444576Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-17T07:45:44.115327185Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=882.859µs
grafana | logger=migrator t=2025-06-17T07:45:44.119343036Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-17T07:45:44.120036172Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=692.126µs
grafana | logger=migrator t=2025-06-17T07:45:44.123891122Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-17T07:45:44.124758141Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=866.329µs
grafana | logger=migrator t=2025-06-17T07:45:44.129081845Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-17T07:45:44.129106986Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.701µs
grafana | logger=migrator t=2025-06-17T07:45:44.133066285Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.14627317Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=13.202565ms
grafana | logger=migrator t=2025-06-17T07:45:44.151537254Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-17T07:45:44.152301131Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=763.467µs
grafana | logger=migrator t=2025-06-17T07:45:44.155962948Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.161086511Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.122643ms
grafana | logger=migrator t=2025-06-17T07:45:44.165237224Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-17T07:45:44.167407515Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=2.170292ms
grafana | logger=migrator t=2025-06-17T07:45:44.172595948Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-17T07:45:44.173668339Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.066951ms
grafana | logger=migrator t=2025-06-17T07:45:44.208643925Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-17T07:45:44.210277722Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.635297ms
grafana | logger=migrator t=2025-06-17T07:45:44.213681436Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-17T07:45:44.227187524Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.506288ms
grafana | logger=migrator t=2025-06-17T07:45:44.230185924Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-17T07:45:44.230705829Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=519.815µs
grafana | logger=migrator t=2025-06-17T07:45:44.237357047Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-17T07:45:44.239025814Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.669147ms
grafana | logger=migrator t=2025-06-17T07:45:44.243454639Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-17T07:45:44.243660972Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=206.303µs
grafana | logger=migrator t=2025-06-17T07:45:44.247489Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-17T07:45:44.248301488Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=818.538µs
grafana | logger=migrator t=2025-06-17T07:45:44.253260649Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-17T07:45:44.253576412Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=314.973µs
grafana | logger=migrator t=2025-06-17T07:45:44.257117608Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.261406282Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.288384ms
grafana | logger=migrator t=2025-06-17T07:45:44.265973128Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.275255483Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=9.271285ms
grafana | logger=migrator t=2025-06-17T07:45:44.279746648Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.280751759Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.005371ms
grafana | logger=migrator t=2025-06-17T07:45:44.285998712Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.286940492Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=940.34µs
grafana | logger=migrator t=2025-06-17T07:45:44.290087244Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-17T07:45:44.290330936Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=243.712µs
grafana | logger=migrator t=2025-06-17T07:45:44.293545929Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-17T07:45:44.297137356Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.594657ms
grafana | logger=migrator t=2025-06-17T07:45:44.301587781Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-17T07:45:44.302269187Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=681.236µs
grafana | logger=migrator t=2025-06-17T07:45:44.305530161Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-17T07:45:44.305742643Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=212.332µs
grafana | logger=migrator t=2025-06-17T07:45:44.31125984Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-17T07:45:44.311712714Z level=info msg="Migration successfully executed" id="Move region to single row" duration=452.354µs
grafana | logger=migrator t=2025-06-17T07:45:44.315012958Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.315807065Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=791.518µs
grafana | logger=migrator t=2025-06-17T07:45:44.319766466Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.320569384Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=800.808µs
grafana | logger=migrator t=2025-06-17T07:45:44.323632505Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.324528594Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=895.719µs
grafana | logger=migrator t=2025-06-17T07:45:44.328598326Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.329485165Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=886.549µs
grafana | logger=migrator t=2025-06-17T07:45:44.33294529Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.333745418Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=800.088µs
grafana | logger=migrator t=2025-06-17T07:45:44.33690207Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-17T07:45:44.338031121Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.128751ms
grafana | logger=migrator t=2025-06-17T07:45:44.342695489Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-17T07:45:44.34271484Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=20.001µs
grafana | logger=migrator t=2025-06-17T07:45:44.348204356Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-17T07:45:44.348221726Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=17.77µs
grafana | logger=migrator t=2025-06-17T07:45:44.356344058Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-17T07:45:44.356386919Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=47.031µs
grafana | logger=migrator t=2025-06-17T07:45:44.363720663Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-17T07:45:44.364977726Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.260563ms
grafana | logger=migrator t=2025-06-17T07:45:44.37228529Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-17T07:45:44.373131579Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=846.679µs
grafana | logger=migrator t=2025-06-17T07:45:44.377372232Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-17T07:45:44.378330092Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=954.88µs
grafana | logger=migrator t=2025-06-17T07:45:44.381272492Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-17T07:45:44.382212422Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=939.31µs
grafana | logger=migrator t=2025-06-17T07:45:44.385098661Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-17T07:45:44.385314463Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=215.622µs
grafana | logger=migrator t=2025-06-17T07:45:44.389559567Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-17T07:45:44.389994221Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=434.484µs
grafana | logger=migrator t=2025-06-17T07:45:44.392782449Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-17T07:45:44.39280211Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=23.371µs
grafana | logger=migrator t=2025-06-17T07:45:44.395641348Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-17T07:45:44.400296605Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.654657ms
grafana | logger=migrator t=2025-06-17T07:45:44.404993213Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-17T07:45:44.405787121Z level=info msg="Migration successfully executed" id="create team table" duration=793.898µs
grafana | logger=migrator t=2025-06-17T07:45:44.408844302Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-17T07:45:44.409745362Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=874.09µs
grafana | logger=migrator t=2025-06-17T07:45:44.412796893Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-17T07:45:44.413697532Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=899.709µs
grafana | logger=migrator t=2025-06-17T07:45:44.418275289Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-17T07:45:44.422986117Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.709868ms
grafana | logger=migrator t=2025-06-17T07:45:44.425897337Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-17T07:45:44.426083748Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=185.272µs
grafana | logger=migrator t=2025-06-17T07:45:44.428975378Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-17T07:45:44.429880957Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=905.179µs
grafana | logger=migrator t=2025-06-17T07:45:44.433027159Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-17T07:45:44.440598236Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=7.570947ms
grafana | logger=migrator t=2025-06-17T07:45:44.463127245Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-17T07:45:44.469352788Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=6.225163ms
grafana | logger=migrator t=2025-06-17T07:45:44.47437349Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-17T07:45:44.476783274Z level=info msg="Migration successfully executed" id="create team member table" duration=2.398794ms
grafana | logger=migrator t=2025-06-17T07:45:44.48032889Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-17T07:45:44.481482422Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.154152ms
grafana | logger=migrator t=2025-06-17T07:45:44.486553683Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-17T07:45:44.488214871Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.623187ms
grafana | logger=migrator t=2025-06-17T07:45:44.494822128Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-17T07:45:44.496682967Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.861089ms
grafana | logger=migrator t=2025-06-17T07:45:44.502591927Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-17T07:45:44.507604838Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.012341ms
grafana | logger=migrator t=2025-06-17T07:45:44.515817392Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-17T07:45:44.523426429Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.607537ms
grafana | logger=migrator t=2025-06-17T07:45:44.527067556Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-17T07:45:44.53042981Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.361464ms
grafana | logger=migrator t=2025-06-17T07:45:44.534637073Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-17T07:45:44.535620733Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=980.42µs
grafana | logger=migrator t=2025-06-17T07:45:44.538965437Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-17T07:45:44.539937126Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=971.049µs
grafana | logger=migrator t=2025-06-17T07:45:44.54328134Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-17T07:45:44.552402834Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=9.120534ms
grafana | logger=migrator t=2025-06-17T07:45:44.557991511Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-17T07:45:44.559134982Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.143171ms
grafana | logger=migrator t=2025-06-17T07:45:44.562558137Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-17T07:45:44.563672579Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.111362ms
grafana | logger=migrator t=2025-06-17T07:45:44.567063473Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-17T07:45:44.568083513Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.01962ms
grafana | logger=migrator t=2025-06-17T07:45:44.572079334Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-17T07:45:44.573150355Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.070801ms
grafana | logger=migrator t=2025-06-17T07:45:44.576557279Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-17T07:45:44.577598561Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.040912ms
grafana | logger=migrator t=2025-06-17T07:45:44.580991735Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-17T07:45:44.582023315Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.03453ms
grafana | logger=migrator t=2025-06-17T07:45:44.586116167Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-17T07:45:44.586715173Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=601.316µs
grafana | logger=migrator t=2025-06-17T07:45:44.589922836Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-17T07:45:44.590258089Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=334.644µs
grafana | logger=migrator t=2025-06-17T07:45:44.595365831Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-17T07:45:44.596218069Z level=info msg="Migration successfully executed" id="create tag table" duration=851.418µs
grafana | logger=migrator t=2025-06-17T07:45:44.599404582Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-17T07:45:44.600364852Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=960.09µs
grafana | logger=migrator t=2025-06-17T07:45:44.603619575Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-17T07:45:44.604489274Z level=info msg="Migration successfully executed" id="create login attempt table" duration=866.059µs
grafana | logger=migrator t=2025-06-17T07:45:44.608355394Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-17T07:45:44.609383504Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.02609ms
grafana | logger=migrator t=2025-06-17T07:45:44.612520446Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-17T07:45:44.613567147Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1"
duration=1.045951ms grafana | logger=migrator t=2025-06-17T07:45:44.616665818Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-17T07:45:44.63161184Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.936522ms grafana | logger=migrator t=2025-06-17T07:45:44.638267617Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-17T07:45:44.639591331Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.324874ms grafana | logger=migrator t=2025-06-17T07:45:44.642709063Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-17T07:45:44.643866254Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.157121ms grafana | logger=migrator t=2025-06-17T07:45:44.648198759Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-17T07:45:44.648674094Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=472.205µs grafana | logger=migrator t=2025-06-17T07:45:44.653675275Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-17T07:45:44.654563423Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=887.038µs grafana | logger=migrator t=2025-06-17T07:45:44.659182501Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-17T07:45:44.660066179Z level=info msg="Migration successfully executed" id="create user auth table" duration=883.138µs grafana | logger=migrator t=2025-06-17T07:45:44.665665007Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 
grafana | logger=migrator t=2025-06-17T07:45:44.667074521Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.408784ms grafana | logger=migrator t=2025-06-17T07:45:44.775615405Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-17T07:45:44.775657176Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=45.311µs grafana | logger=migrator t=2025-06-17T07:45:44.780810248Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.789241724Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.429706ms grafana | logger=migrator t=2025-06-17T07:45:44.793940812Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.800779142Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.83589ms grafana | logger=migrator t=2025-06-17T07:45:44.805416399Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.811877495Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.453776ms grafana | logger=migrator t=2025-06-17T07:45:44.815557823Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.819603474Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.048261ms grafana | logger=migrator t=2025-06-17T07:45:44.824272401Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.825408822Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" 
duration=1.137711ms grafana | logger=migrator t=2025-06-17T07:45:44.828889008Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.8379085Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.015882ms grafana | logger=migrator t=2025-06-17T07:45:44.844356356Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-17T07:45:44.851525518Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=7.170333ms grafana | logger=migrator t=2025-06-17T07:45:44.855025924Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-17T07:45:44.855848852Z level=info msg="Migration successfully executed" id="create server_lock table" duration=822.378µs grafana | logger=migrator t=2025-06-17T07:45:44.860654362Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-17T07:45:44.861992235Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.337343ms grafana | logger=migrator t=2025-06-17T07:45:44.865550791Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-17T07:45:44.867312639Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.761048ms grafana | logger=migrator t=2025-06-17T07:45:44.871991377Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-17T07:45:44.873074748Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.082811ms grafana | logger=migrator t=2025-06-17T07:45:44.876172199Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator 
t=2025-06-17T07:45:44.877864777Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.691257ms grafana | logger=migrator t=2025-06-17T07:45:44.881559234Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-17T07:45:44.883296532Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.736678ms grafana | logger=migrator t=2025-06-17T07:45:44.888364924Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-17T07:45:44.898055822Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.713718ms grafana | logger=migrator t=2025-06-17T07:45:44.904315505Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-17T07:45:44.905896602Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.581837ms grafana | logger=migrator t=2025-06-17T07:45:44.91162155Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-17T07:45:44.918177987Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=6.551336ms grafana | logger=migrator t=2025-06-17T07:45:44.949411735Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-17T07:45:44.951483526Z level=info msg="Migration successfully executed" id="create cache_data table" duration=2.069451ms grafana | logger=migrator t=2025-06-17T07:45:44.958647069Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-17T07:45:44.959910352Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.266303ms grafana | 
logger=migrator t=2025-06-17T07:45:45.058188182Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-17T07:45:45.060880399Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=2.693577ms grafana | logger=migrator t=2025-06-17T07:45:45.065448856Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-17T07:45:45.067471637Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.024521ms grafana | logger=migrator t=2025-06-17T07:45:45.071243855Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-17T07:45:45.071267565Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=24.21µs grafana | logger=migrator t=2025-06-17T07:45:45.075937653Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-17T07:45:45.076046254Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=108.981µs grafana | logger=migrator t=2025-06-17T07:45:45.084296358Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-17T07:45:45.085447299Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.150881ms grafana | logger=migrator t=2025-06-17T07:45:45.091909734Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-17T07:45:45.093844644Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.93791ms grafana | logger=migrator t=2025-06-17T07:45:45.099635213Z level=info msg="Executing migration" id="add index in alert_definition on 
org_id and uid columns" grafana | logger=migrator t=2025-06-17T07:45:45.100832525Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.200192ms grafana | logger=migrator t=2025-06-17T07:45:45.105786695Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-17T07:45:45.105810756Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=25.68µs grafana | logger=migrator t=2025-06-17T07:45:45.11012883Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-17T07:45:45.11114696Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.01787ms grafana | logger=migrator t=2025-06-17T07:45:45.114197281Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-17T07:45:45.115154611Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=953.48µs grafana | logger=migrator t=2025-06-17T07:45:45.118299903Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-17T07:45:45.119281353Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=980.76µs grafana | logger=migrator t=2025-06-17T07:45:45.12488355Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-17T07:45:45.126492627Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.607907ms grafana | 
logger=migrator t=2025-06-17T07:45:45.133121454Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-17T07:45:45.141005924Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.88918ms grafana | logger=migrator t=2025-06-17T07:45:45.144715582Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-17T07:45:45.145385858Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=670.036µs grafana | logger=migrator t=2025-06-17T07:45:45.149772393Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-17T07:45:45.149859994Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=88.421µs grafana | logger=migrator t=2025-06-17T07:45:45.153091227Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-17T07:45:45.154019226Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=927.879µs grafana | logger=migrator t=2025-06-17T07:45:45.157201189Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-17T07:45:45.158192599Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=991.08µs grafana | logger=migrator t=2025-06-17T07:45:45.164457232Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-17T07:45:45.166331431Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version 
columns" duration=1.876559ms grafana | logger=migrator t=2025-06-17T07:45:45.171431484Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-17T07:45:45.171460694Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=31.171µs grafana | logger=migrator t=2025-06-17T07:45:45.173821158Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-17T07:45:45.174812988Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=991.93µs grafana | logger=migrator t=2025-06-17T07:45:45.180833699Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-17T07:45:45.181841599Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.00765ms grafana | logger=migrator t=2025-06-17T07:45:45.185214544Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-17T07:45:45.186407696Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.192102ms grafana | logger=migrator t=2025-06-17T07:45:45.19279365Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-17T07:45:45.193775971Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=978.121µs grafana | logger=migrator t=2025-06-17T07:45:45.197955603Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.203769172Z level=info 
msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.815199ms grafana | logger=migrator t=2025-06-17T07:45:45.208262218Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.209162397Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=900.069µs grafana | logger=migrator t=2025-06-17T07:45:45.286365833Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.28819877Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.835887ms grafana | logger=migrator t=2025-06-17T07:45:45.312481518Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.341699384Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.219376ms grafana | logger=migrator t=2025-06-17T07:45:45.344984708Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.374082454Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.097696ms grafana | logger=migrator t=2025-06-17T07:45:45.381124286Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.382751102Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.626196ms grafana | logger=migrator t=2025-06-17T07:45:45.386206357Z level=info msg="Executing migration" id="add index rule_org_id, current_state on 
alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.387795144Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.587947ms grafana | logger=migrator t=2025-06-17T07:45:45.392537182Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-17T07:45:45.398318511Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.776619ms grafana | logger=migrator t=2025-06-17T07:45:45.40618272Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-17T07:45:45.412494335Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.310275ms grafana | logger=migrator t=2025-06-17T07:45:45.449557452Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:45.451454382Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.891829ms grafana | logger=migrator t=2025-06-17T07:45:45.455464162Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-17T07:45:45.456774506Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.311174ms grafana | logger=migrator t=2025-06-17T07:45:45.462459274Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-17T07:45:45.463407053Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=947.239µs grafana | logger=migrator t=2025-06-17T07:45:45.467726486Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana 
| logger=migrator t=2025-06-17T07:45:45.468657176Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=930.54µs grafana | logger=migrator t=2025-06-17T07:45:45.472095961Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-17T07:45:45.472124791Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=30.27µs grafana | logger=migrator t=2025-06-17T07:45:45.477632217Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.483355506Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.726918ms grafana | logger=migrator t=2025-06-17T07:45:45.489234825Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.495879893Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.644108ms grafana | logger=migrator t=2025-06-17T07:45:45.499635781Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.504750953Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.097832ms grafana | logger=migrator t=2025-06-17T07:45:45.508968596Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-17T07:45:45.510482922Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.517486ms grafana | logger=migrator t=2025-06-17T07:45:45.516134159Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator 
t=2025-06-17T07:45:45.518121149Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.98659ms grafana | logger=migrator t=2025-06-17T07:45:45.522451883Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.528776257Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.323124ms grafana | logger=migrator t=2025-06-17T07:45:45.532817539Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.538372075Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.553106ms grafana | logger=migrator t=2025-06-17T07:45:45.54185096Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-17T07:45:45.542999252Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.147802ms grafana | logger=migrator t=2025-06-17T07:45:45.562761233Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:45.57136363Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.603357ms grafana | logger=migrator t=2025-06-17T07:45:45.575969287Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:45.582731567Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.741989ms grafana | logger=migrator t=2025-06-17T07:45:45.586994099Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:45.58701562Z 
level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=22.531µs grafana | logger=migrator t=2025-06-17T07:45:45.591529956Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:45.592835839Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.306403ms grafana | logger=migrator t=2025-06-17T07:45:45.598578497Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-17T07:45:45.599871851Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.298864ms grafana | logger=migrator t=2025-06-17T07:45:45.604661649Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-17T07:45:45.605845481Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.183472ms grafana | logger=migrator t=2025-06-17T07:45:45.639878088Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-17T07:45:45.639925758Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=51.211µs grafana | logger=migrator t=2025-06-17T07:45:45.6441597Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-17T07:45:45.649225762Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.066102ms grafana | logger=migrator t=2025-06-17T07:45:45.654386095Z level=info msg="Executing 
migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-17T07:45:45.659012392Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.625997ms
grafana | logger=migrator t=2025-06-17T07:45:45.662438127Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-17T07:45:45.671514949Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=9.075622ms
grafana | logger=migrator t=2025-06-17T07:45:45.676426069Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2025-06-17T07:45:45.685240838Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=8.813359ms
grafana | logger=migrator t=2025-06-17T07:45:45.689633814Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2025-06-17T07:45:45.696771506Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.133422ms
grafana | logger=migrator t=2025-06-17T07:45:45.700763517Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2025-06-17T07:45:45.700784917Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=22.22µs
grafana | logger=migrator t=2025-06-17T07:45:45.704755887Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2025-06-17T07:45:45.705455844Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=699.867µs
grafana | logger=migrator t=2025-06-17T07:45:45.711661538Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:45.72369475Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=12.034202ms
grafana | logger=migrator t=2025-06-17T07:45:45.727269606Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2025-06-17T07:45:45.727307116Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=37.69µs
grafana | logger=migrator t=2025-06-17T07:45:45.732134316Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:45.738139236Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.00344ms
grafana | logger=migrator t=2025-06-17T07:45:45.844040584Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2025-06-17T07:45:45.846142595Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.103921ms
grafana | logger=migrator t=2025-06-17T07:45:45.914224337Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:45.919205818Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.983531ms
grafana | logger=migrator t=2025-06-17T07:45:45.968859893Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2025-06-17T07:45:45.970005665Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.147852ms
grafana | logger=migrator t=2025-06-17T07:45:46.037109147Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2025-06-17T07:45:46.039109937Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=2.00222ms
grafana | logger=migrator t=2025-06-17T07:45:46.053130089Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:46.061017919Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.90346ms
grafana | logger=migrator t=2025-06-17T07:45:46.125286373Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2025-06-17T07:45:46.127197532Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.911879ms
grafana | logger=migrator t=2025-06-17T07:45:46.13290175Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2025-06-17T07:45:46.134511296Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.611716ms
grafana | logger=migrator t=2025-06-17T07:45:46.137543988Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2025-06-17T07:45:46.138597658Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.05294ms
grafana | logger=migrator t=2025-06-17T07:45:46.141627289Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2025-06-17T07:45:46.14274853Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.122361ms
grafana | logger=migrator t=2025-06-17T07:45:46.14663303Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2025-06-17T07:45:46.1466514Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=18.01µs
grafana | logger=migrator t=2025-06-17T07:45:46.149573209Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2025-06-17T07:45:46.15065578Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.082191ms
grafana | logger=migrator t=2025-06-17T07:45:46.15355688Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:46.154754293Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.197213ms
grafana | logger=migrator t=2025-06-17T07:45:46.158855034Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-17T07:45:46.159252268Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-17T07:45:46.162117457Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2025-06-17T07:45:46.162556031Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=438.324µs
grafana | logger=migrator t=2025-06-17T07:45:46.166159308Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2025-06-17T07:45:46.16737608Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.216842ms
grafana | logger=migrator t=2025-06-17T07:45:46.17323439Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-17T07:45:46.182883638Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.646138ms
grafana | logger=migrator t=2025-06-17T07:45:46.186852388Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.187811109Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=958.001µs
grafana | logger=migrator t=2025-06-17T07:45:46.191311544Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-17T07:45:46.192119152Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=807.768µs
grafana | logger=migrator t=2025-06-17T07:45:46.197625108Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.198571637Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=945.839µs
grafana | logger=migrator t=2025-06-17T07:45:46.201222394Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-17T07:45:46.202356766Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.134032ms
grafana | logger=migrator t=2025-06-17T07:45:46.205146575Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.206237405Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.08976ms
grafana | logger=migrator t=2025-06-17T07:45:46.211212086Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-17T07:45:46.211312187Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=106.081µs
grafana | logger=migrator t=2025-06-17T07:45:46.217181567Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-17T07:45:46.217202447Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=21.63µs
grafana | logger=migrator t=2025-06-17T07:45:46.219943105Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-17T07:45:46.2273424Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=7.395225ms
grafana | logger=migrator t=2025-06-17T07:45:46.231597443Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.231859235Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=261.662µs
grafana | logger=migrator t=2025-06-17T07:45:46.235470433Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-17T07:45:46.236277461Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=806.298µs
grafana | logger=migrator t=2025-06-17T07:45:46.239087909Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-17T07:45:46.239372872Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=285.093µs
grafana | logger=migrator t=2025-06-17T07:45:46.242371422Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-17T07:45:46.243353082Z level=info msg="Migration successfully executed" id="create data_keys table" duration=981.23µs
grafana | logger=migrator t=2025-06-17T07:45:46.247514775Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-17T07:45:46.248334203Z level=info msg="Migration successfully executed" id="create secrets table" duration=823.598µs
grafana | logger=migrator t=2025-06-17T07:45:46.251820758Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-17T07:45:46.288189108Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.36576ms
grafana | logger=migrator t=2025-06-17T07:45:46.291762275Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-17T07:45:46.297916687Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.153992ms
grafana | logger=migrator t=2025-06-17T07:45:46.302036099Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-17T07:45:46.30218233Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=146.451µs
grafana | logger=migrator t=2025-06-17T07:45:46.307790137Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-17T07:45:46.340673401Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.883264ms
grafana | logger=migrator t=2025-06-17T07:45:46.379083782Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-17T07:45:46.419976147Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=40.889525ms
grafana | logger=migrator t=2025-06-17T07:45:46.425924448Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.426756756Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=835.418µs
grafana | logger=migrator t=2025-06-17T07:45:46.430323732Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-17T07:45:46.431211381Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=887.569µs
grafana | logger=migrator t=2025-06-17T07:45:46.434693577Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-17T07:45:46.435195372Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=501.145µs
grafana | logger=migrator t=2025-06-17T07:45:46.440249203Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-17T07:45:46.441257543Z level=info msg="Migration successfully executed" id="create permission table" duration=1.00802ms
grafana | logger=migrator t=2025-06-17T07:45:46.444754318Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2025-06-17T07:45:46.446085063Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.330474ms
grafana | logger=migrator t=2025-06-17T07:45:46.449472177Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2025-06-17T07:45:46.450624128Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.153631ms
grafana | logger=migrator t=2025-06-17T07:45:46.456015923Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2025-06-17T07:45:46.457238486Z level=info msg="Migration successfully executed" id="create role table" duration=1.222043ms
grafana | logger=migrator t=2025-06-17T07:45:46.460861002Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2025-06-17T07:45:46.469575721Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.713619ms
grafana | logger=migrator t=2025-06-17T07:45:46.474165078Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2025-06-17T07:45:46.482279651Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.099863ms
grafana | logger=migrator t=2025-06-17T07:45:46.487997909Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2025-06-17T07:45:46.490334962Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.336643ms
grafana | logger=migrator t=2025-06-17T07:45:46.4951183Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2025-06-17T07:45:46.496249603Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.134433ms
grafana | logger=migrator t=2025-06-17T07:45:46.499543366Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.500666547Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.122851ms
grafana | logger=migrator t=2025-06-17T07:45:46.506389605Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2025-06-17T07:45:46.508332625Z level=info msg="Migration successfully executed" id="create team role table" duration=1.9422ms
grafana | logger=migrator t=2025-06-17T07:45:46.515126823Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2025-06-17T07:45:46.516986663Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.85982ms
grafana | logger=migrator t=2025-06-17T07:45:46.520437498Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2025-06-17T07:45:46.52266953Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.229892ms
grafana | logger=migrator t=2025-06-17T07:45:46.52663302Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2025-06-17T07:45:46.527866133Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.232813ms
grafana | logger=migrator t=2025-06-17T07:45:46.531580361Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-17T07:45:46.532871465Z level=info msg="Migration successfully executed" id="create user role table" duration=1.290954ms
grafana | logger=migrator t=2025-06-17T07:45:46.536972816Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-17T07:45:46.539442331Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.470805ms
grafana | logger=migrator t=2025-06-17T07:45:46.54427745Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-17T07:45:46.545516712Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.238742ms
grafana | logger=migrator t=2025-06-17T07:45:46.548958657Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-17T07:45:46.551030159Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.070182ms
grafana | logger=migrator t=2025-06-17T07:45:46.554425313Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2025-06-17T07:45:46.555372953Z level=info msg="Migration successfully executed" id="create builtin role table" duration=947.45µs
grafana | logger=migrator t=2025-06-17T07:45:46.559749967Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2025-06-17T07:45:46.560823688Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.073281ms
grafana | logger=migrator t=2025-06-17T07:45:46.564290664Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2025-06-17T07:45:46.565424635Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.133591ms
grafana | logger=migrator t=2025-06-17T07:45:46.568952001Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2025-06-17T07:45:46.577388747Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.435976ms
grafana | logger=migrator t=2025-06-17T07:45:46.58169762Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2025-06-17T07:45:46.582592049Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=893.889µs
grafana | logger=migrator t=2025-06-17T07:45:46.586113016Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2025-06-17T07:45:46.587033355Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=920.009µs
grafana | logger=migrator t=2025-06-17T07:45:46.591259338Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.593139896Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.879628ms
grafana | logger=migrator t=2025-06-17T07:45:46.5983432Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2025-06-17T07:45:46.600421481Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.077751ms
grafana | logger=migrator t=2025-06-17T07:45:46.604116628Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2025-06-17T07:45:46.604891146Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=774.118µs
grafana | logger=migrator t=2025-06-17T07:45:46.636690049Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2025-06-17T07:45:46.638568738Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.878529ms
grafana | logger=migrator t=2025-06-17T07:45:46.643493588Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2025-06-17T07:45:46.651474269Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.978611ms
grafana | logger=migrator t=2025-06-17T07:45:46.654826233Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2025-06-17T07:45:46.66044968Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.622887ms
grafana | logger=migrator t=2025-06-17T07:45:46.663997636Z level=info msg="Executing migration" id="permission attribute migration"
grafana | logger=migrator t=2025-06-17T07:45:46.672035668Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.037212ms
grafana | logger=migrator t=2025-06-17T07:45:46.676346282Z level=info msg="Executing migration" id="permission identifier migration"
grafana | logger=migrator t=2025-06-17T07:45:46.684452875Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.105913ms
grafana | logger=migrator t=2025-06-17T07:45:46.687771188Z level=info msg="Executing migration" id="add permission identifier index"
grafana | logger=migrator t=2025-06-17T07:45:46.688801668Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.02728ms
grafana | logger=migrator t=2025-06-17T07:45:46.692215184Z level=info msg="Executing migration" id="add permission action scope role_id index"
grafana | logger=migrator t=2025-06-17T07:45:46.693315514Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.09713ms
grafana | logger=migrator t=2025-06-17T07:45:46.698671229Z level=info msg="Executing migration" id="remove permission role_id action scope index"
grafana | logger=migrator t=2025-06-17T07:45:46.699678659Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.00678ms
grafana | logger=migrator t=2025-06-17T07:45:46.70272277Z level=info msg="Executing migration" id="add group mapping UID column to user_role table"
grafana | logger=migrator t=2025-06-17T07:45:46.71060525Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=7.87834ms
grafana | logger=migrator t=2025-06-17T07:45:46.713820603Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index"
grafana | logger=migrator t=2025-06-17T07:45:46.714941675Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.120032ms
grafana | logger=migrator t=2025-06-17T07:45:46.719148587Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index"
grafana | logger=migrator t=2025-06-17T07:45:46.720124547Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=975.89µs
grafana | logger=migrator t=2025-06-17T07:45:46.723231549Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.724095987Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=864.038µs
grafana | logger=migrator t=2025-06-17T07:45:46.728056637Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.729095008Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.038231ms
grafana | logger=migrator t=2025-06-17T07:45:46.732569083Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-17T07:45:46.732585824Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=17.571µs
grafana | logger=migrator t=2025-06-17T07:45:46.735983908Z level=info msg="Executing migration" id="create query_history_details table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.737308012Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.324004ms
grafana | logger=migrator t=2025-06-17T07:45:46.741971388Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2025-06-17T07:45:46.742029439Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.381µs
grafana | logger=migrator t=2025-06-17T07:45:46.745537866Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2025-06-17T07:45:46.74600490Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=466.744µs
grafana | logger=migrator t=2025-06-17T07:45:46.749173402Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2025-06-17T07:45:46.749764248Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=591.536µs
grafana | logger=migrator t=2025-06-17T07:45:46.75297845Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
grafana | logger=migrator t=2025-06-17T07:45:46.75384356Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=864.08µs
grafana | logger=migrator t=2025-06-17T07:45:46.758195123Z level=info msg="Executing migration" id="drop managed folder create actions"
grafana | logger=migrator t=2025-06-17T07:45:46.758513857Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=318.324µs
grafana | logger=migrator t=2025-06-17T07:45:46.761968821Z level=info msg="Executing migration" id="alerting notification permissions"
grafana | logger=migrator t=2025-06-17T07:45:46.762515378Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=547.447µs
grafana | logger=migrator t=2025-06-17T07:45:46.765762481Z level=info msg="Executing migration" id="create query_history_star table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.766546269Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=783.388µs
grafana | logger=migrator t=2025-06-17T07:45:46.770114274Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.771299327Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.181403ms
grafana | logger=migrator t=2025-06-17T07:45:46.775436688Z level=info msg="Executing migration" id="add column org_id in query_history_star"
grafana | logger=migrator t=2025-06-17T07:45:46.783611752Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.174034ms
grafana | logger=migrator t=2025-06-17T07:45:46.786894235Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
grafana | logger=migrator t=2025-06-17T07:45:46.786912175Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=18.37µs
grafana | logger=migrator t=2025-06-17T07:45:46.790398651Z level=info msg="Executing migration" id="create correlation table v1"
grafana | logger=migrator t=2025-06-17T07:45:46.791370791Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=971.13µs
grafana | logger=migrator t=2025-06-17T07:45:46.795781736Z level=info msg="Executing migration" id="add index correlations.uid"
grafana | logger=migrator t=2025-06-17T07:45:46.796838366Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.05719ms
grafana | logger=migrator t=2025-06-17T07:45:46.800079469Z level=info msg="Executing migration" id="add index correlations.source_uid"
grafana | logger=migrator t=2025-06-17T07:45:46.80112759Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.047821ms
grafana | logger=migrator t=2025-06-17T07:45:46.804464984Z level=info msg="Executing migration" id="add correlation config column"
grafana | logger=migrator t=2025-06-17T07:45:46.813797509Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.333885ms
grafana | logger=migrator t=2025-06-17T07:45:46.817116242Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.818179134Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.062642ms
grafana | logger=migrator t=2025-06-17T07:45:46.822277945Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.823283525Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.00468ms
grafana | logger=migrator t=2025-06-17T07:45:46.826823301Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.849606673Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.782192ms
grafana | logger=migrator t=2025-06-17T07:45:46.853787595Z level=info msg="Executing migration" id="create correlation v2"
grafana | logger=migrator t=2025-06-17T07:45:46.854552523Z level=info msg="Migration successfully executed" id="create correlation v2" duration=764.718µs
grafana | logger=migrator t=2025-06-17T07:45:46.857761896Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:46.858546173Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=783.977µs
grafana | logger=migrator t=2025-06-17T07:45:46.861548124Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:46.862321622Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=772.938µs
grafana | logger=migrator t=2025-06-17T07:45:46.866185031Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
grafana | logger=migrator t=2025-06-17T07:45:46.866965719Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=779.878µs
grafana | logger=migrator t=2025-06-17T07:45:46.893204105Z level=info msg="Executing migration" id="copy correlation v1 to v2"
grafana | logger=migrator t=2025-06-17T07:45:46.89367139Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=469.725µs
grafana | logger=migrator t=2025-06-17T07:45:46.896868923Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
grafana | logger=migrator t=2025-06-17T07:45:46.897822013Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=952.95µs
grafana | logger=migrator t=2025-06-17T07:45:46.900804743Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2025-06-17T07:45:46.909782864Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.97786ms
grafana | logger=migrator t=2025-06-17T07:45:46.914829285Z level=info msg="Executing migration" id="add type column"
grafana | logger=migrator t=2025-06-17T07:45:46.924156969Z level=info msg="Migration successfully executed" id="add type column" duration=9.284654ms
grafana | logger=migrator t=2025-06-17T07:45:46.927834868Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2025-06-17T07:45:46.928594685Z level=info msg="Migration successfully executed" id="create entity_events table" duration=759.637µs
grafana | logger=migrator t=2025-06-17T07:45:46.939306964Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2025-06-17T07:45:46.941423506Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.120092ms
grafana | logger=migrator t=2025-06-17T07:45:46.945888451Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.948629919Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.953500278Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.953954853Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.962153706Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2025-06-17T07:45:46.963883814Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.728838ms
grafana | logger=migrator t=2025-06-17T07:45:46.97141634Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2025-06-17T07:45:46.972604652Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.187732ms
grafana | logger=migrator t=2025-06-17T07:45:46.975924196Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.977541212Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.615606ms
grafana | logger=migrator t=2025-06-17T07:45:46.987920568Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-17T07:45:46.989754557Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.833419ms
grafana | logger=migrator t=2025-06-17T07:45:46.997826299Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:46.998855479Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.02854ms
grafana | logger=migrator t=2025-06-17T07:45:47.002502006Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:47.004098892Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.597196ms
grafana | logger=migrator t=2025-06-17T07:45:47.008551567Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2025-06-17T07:45:47.009902581Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.349644ms
grafana | logger=migrator t=2025-06-17T07:45:47.01467493Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2025-06-17T07:45:47.015826691Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.151231ms
grafana | logger=migrator t=2025-06-17T07:45:47.020505489Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:47.021742361Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.239802ms
grafana | logger=migrator t=2025-06-17T07:45:47.028056585Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-17T07:45:47.029172467Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.115882ms
grafana | logger=migrator t=2025-06-17T07:45:47.032324659Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2025-06-17T07:45:47.033384979Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.06041ms
grafana | logger=migrator t=2025-06-17T07:45:47.037393811Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2025-06-17T07:45:47.063253783Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.858862ms
grafana | logger=migrator t=2025-06-17T07:45:47.069491426Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2025-06-17T07:45:47.076249875Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.757119ms
grafana | logger=migrator t=2025-06-17T07:45:47.080156004Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2025-06-17T07:45:47.088740752Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.583528ms
grafana | logger=migrator t=2025-06-17T07:45:47.091766902Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2025-06-17T07:45:47.091986024Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=219.502µs
grafana | logger=migrator t=2025-06-17T07:45:47.097029636Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2025-06-17T07:45:47.103645032Z level=info msg="Migration successfully executed" id="add share column" duration=6.614006ms
grafana | logger=migrator t=2025-06-17T07:45:47.10728324Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2025-06-17T07:45:47.107457152Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=173.862µs
grafana | logger=migrator t=2025-06-17T07:45:47.111168039Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2025-06-17T07:45:47.112073308Z level=info msg="Migration successfully executed" id="create file table" duration=904.869µs
grafana | logger=migrator 
t=2025-06-17T07:45:47.115861817Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-17T07:45:47.117082319Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.219712ms grafana | logger=migrator t=2025-06-17T07:45:47.123293692Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-17T07:45:47.124686366Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.392584ms grafana | logger=migrator t=2025-06-17T07:45:47.156033705Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-17T07:45:47.157793343Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.757748ms grafana | logger=migrator t=2025-06-17T07:45:47.162787504Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-17T07:45:47.164588191Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.800307ms grafana | logger=migrator t=2025-06-17T07:45:47.170422971Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-17T07:45:47.170441111Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=18.571µs grafana | logger=migrator t=2025-06-17T07:45:47.173444801Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-17T07:45:47.173465691Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.75µs grafana | logger=migrator t=2025-06-17T07:45:47.176902807Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator 
t=2025-06-17T07:45:47.177809136Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=902.399µs grafana | logger=migrator t=2025-06-17T07:45:47.183549194Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-17T07:45:47.183808607Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=285.513µs grafana | logger=migrator t=2025-06-17T07:45:47.187206511Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-17T07:45:47.188587016Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.379615ms grafana | logger=migrator t=2025-06-17T07:45:47.192628246Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-17T07:45:47.201745418Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.116512ms grafana | logger=migrator t=2025-06-17T07:45:47.205915781Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-17T07:45:47.206076583Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=160.542µs grafana | logger=migrator t=2025-06-17T07:45:47.209310226Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-17T07:45:47.211187865Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.875789ms grafana | logger=migrator t=2025-06-17T07:45:47.215117724Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-17T07:45:47.21572851Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=610.706µs grafana | logger=migrator 
t=2025-06-17T07:45:47.219114505Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-17T07:45:47.219311047Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=196.222µs grafana | logger=migrator t=2025-06-17T07:45:47.223245617Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-17T07:45:47.223742422Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=496.245µs grafana | logger=migrator t=2025-06-17T07:45:47.226316528Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-17T07:45:47.239808945Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=13.492797ms grafana | logger=migrator t=2025-06-17T07:45:47.243000977Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-17T07:45:47.250396333Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.394486ms grafana | logger=migrator t=2025-06-17T07:45:47.253442914Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-17T07:45:47.254525975Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.082171ms grafana | logger=migrator t=2025-06-17T07:45:47.258730508Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-17T07:45:47.338186494Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=79.453686ms grafana | logger=migrator 
t=2025-06-17T07:45:47.341870471Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-17T07:45:47.342887622Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.014531ms grafana | logger=migrator t=2025-06-17T07:45:47.347467078Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-17T07:45:47.348266757Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=799.389µs grafana | logger=migrator t=2025-06-17T07:45:47.35156642Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-17T07:45:47.381118039Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=29.547759ms grafana | logger=migrator t=2025-06-17T07:45:47.385578685Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-17T07:45:47.392777968Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.198793ms grafana | logger=migrator t=2025-06-17T07:45:47.417327887Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-17T07:45:47.418066645Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=741.068µs grafana | logger=migrator t=2025-06-17T07:45:47.421613141Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-17T07:45:47.421945554Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=331.503µs grafana | logger=migrator t=2025-06-17T07:45:47.425214187Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | 
logger=migrator t=2025-06-17T07:45:47.42543526Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=220.813µs grafana | logger=migrator t=2025-06-17T07:45:47.428340349Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-17T07:45:47.428535181Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=192.022µs grafana | logger=migrator t=2025-06-17T07:45:47.432785864Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-17T07:45:47.432993886Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=207.222µs grafana | logger=migrator t=2025-06-17T07:45:47.435923756Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-17T07:45:47.43721167Z level=info msg="Migration successfully executed" id="create folder table" duration=1.286754ms grafana | logger=migrator t=2025-06-17T07:45:47.44021893Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-17T07:45:47.441421562Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.201802ms grafana | logger=migrator t=2025-06-17T07:45:47.44614465Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-17T07:45:47.447268611Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.123461ms grafana | logger=migrator t=2025-06-17T07:45:47.450232121Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-17T07:45:47.450261412Z level=info msg="Migration successfully executed" id="Update 
folder title length" duration=26.041µs grafana | logger=migrator t=2025-06-17T07:45:47.45309576Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-17T07:45:47.454247553Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.151063ms grafana | logger=migrator t=2025-06-17T07:45:47.458754018Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-17T07:45:47.460581567Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.823298ms grafana | logger=migrator t=2025-06-17T07:45:47.466385925Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-17T07:45:47.468173994Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.786999ms grafana | logger=migrator t=2025-06-17T07:45:47.471500718Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-17T07:45:47.472164974Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=663.426µs grafana | logger=migrator t=2025-06-17T07:45:47.476367926Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-17T07:45:47.47663963Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=273.694µs grafana | logger=migrator t=2025-06-17T07:45:47.481493528Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-17T07:45:47.483302707Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" 
duration=1.808909ms grafana | logger=migrator t=2025-06-17T07:45:47.486699691Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-17T07:45:47.490545081Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=3.83744ms grafana | logger=migrator t=2025-06-17T07:45:47.496388101Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-17T07:45:47.497764764Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.380294ms grafana | logger=migrator t=2025-06-17T07:45:47.5022929Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-17T07:45:47.504378111Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.090101ms grafana | logger=migrator t=2025-06-17T07:45:47.508788246Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-17T07:45:47.509939487Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.150921ms grafana | logger=migrator t=2025-06-17T07:45:47.513008739Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-17T07:45:47.5141378Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.128331ms grafana | logger=migrator t=2025-06-17T07:45:47.518547066Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-17T07:45:47.519466855Z level=info msg="Migration successfully executed" id="create anon_device table" duration=919.449µs grafana | logger=migrator 
t=2025-06-17T07:45:47.522565076Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-17T07:45:47.523918019Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.353043ms grafana | logger=migrator t=2025-06-17T07:45:47.528475326Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-17T07:45:47.530431276Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.95846ms grafana | logger=migrator t=2025-06-17T07:45:47.535309505Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-17T07:45:47.536164934Z level=info msg="Migration successfully executed" id="create signing_key table" duration=855.299µs grafana | logger=migrator t=2025-06-17T07:45:47.539333706Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-17T07:45:47.540562198Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.223502ms grafana | logger=migrator t=2025-06-17T07:45:47.544101065Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-17T07:45:47.545980473Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.879058ms grafana | logger=migrator t=2025-06-17T07:45:47.55053352Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-17T07:45:47.550841513Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=308.423µs grafana | logger=migrator t=2025-06-17T07:45:47.556676042Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 
grafana | logger=migrator t=2025-06-17T07:45:47.571403241Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=14.727029ms grafana | logger=migrator t=2025-06-17T07:45:47.574774276Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-17T07:45:47.575299821Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=526.715µs grafana | logger=migrator t=2025-06-17T07:45:47.580452174Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-17T07:45:47.580473224Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=20.78µs grafana | logger=migrator t=2025-06-17T07:45:47.584673886Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-17T07:45:47.585924419Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.250113ms grafana | logger=migrator t=2025-06-17T07:45:47.590165762Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-17T07:45:47.590195912Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=31.41µs grafana | logger=migrator t=2025-06-17T07:45:47.593821039Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-17T07:45:47.59586214Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.040971ms grafana | logger=migrator t=2025-06-17T07:45:47.600389556Z level=info msg="Executing migration" id="Restore index for 
dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-17T07:45:47.601510527Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.120341ms grafana | logger=migrator t=2025-06-17T07:45:47.604872781Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-17T07:45:47.606493388Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.615607ms grafana | logger=migrator t=2025-06-17T07:45:47.61160736Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-17T07:45:47.613275806Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.667436ms grafana | logger=migrator t=2025-06-17T07:45:47.61649225Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-17T07:45:47.617325598Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=832.758µs grafana | logger=migrator t=2025-06-17T07:45:47.620615081Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-17T07:45:47.620888304Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=276.203µs grafana | logger=migrator t=2025-06-17T07:45:47.62544931Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-17T07:45:47.626147957Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=697.857µs grafana | logger=migrator t=2025-06-17T07:45:47.629133017Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | 
logger=migrator t=2025-06-17T07:45:47.630064497Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=930.48µs grafana | logger=migrator t=2025-06-17T07:45:47.633560753Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-17T07:45:47.635048497Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.487154ms grafana | logger=migrator t=2025-06-17T07:45:47.641766426Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-17T07:45:47.654888109Z level=info msg="Migration successfully executed" id="add stack_id column" duration=13.120523ms grafana | logger=migrator t=2025-06-17T07:45:47.659226513Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-17T07:45:47.668579388Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.351835ms grafana | logger=migrator t=2025-06-17T07:45:47.671618589Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-17T07:45:47.678759082Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.139123ms grafana | logger=migrator t=2025-06-17T07:45:47.682005794Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-17T07:45:47.692011116Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.004082ms grafana | logger=migrator t=2025-06-17T07:45:47.696459911Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-17T07:45:47.696629733Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=169.402µs grafana | logger=migrator t=2025-06-17T07:45:47.699713904Z level=info msg="Executing migration" id="Add 
unique index migration_uid" grafana | logger=migrator t=2025-06-17T07:45:47.700834766Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.120332ms grafana | logger=migrator t=2025-06-17T07:45:47.704787035Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-17T07:45:47.714765777Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.977422ms grafana | logger=migrator t=2025-06-17T07:45:47.719552895Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-17T07:45:47.719719037Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=166.052µs grafana | logger=migrator t=2025-06-17T07:45:47.721908719Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-17T07:45:47.723027221Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.118272ms grafana | logger=migrator t=2025-06-17T07:45:47.726278544Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-17T07:45:47.754359959Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=28.081355ms grafana | logger=migrator t=2025-06-17T07:45:47.7603661Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-17T07:45:47.761873745Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.506885ms grafana | logger=migrator t=2025-06-17T07:45:47.764961467Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-17T07:45:47.766177539Z 
level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.214912ms grafana | logger=migrator t=2025-06-17T07:45:47.769569623Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-17T07:45:47.769924657Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=354.344µs grafana | logger=migrator t=2025-06-17T07:45:47.774756096Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-17T07:45:47.775593954Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=836.768µs grafana | logger=migrator t=2025-06-17T07:45:47.779254002Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-17T07:45:47.807877683Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=28.621301ms grafana | logger=migrator t=2025-06-17T07:45:47.812390558Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-17T07:45:47.81350638Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.115292ms grafana | logger=migrator t=2025-06-17T07:45:47.818525611Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-17T07:45:47.820322348Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.796017ms grafana | logger=migrator t=2025-06-17T07:45:47.823947336Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-17T07:45:47.82443047Z level=info msg="Migration 
successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=483.084µs grafana | logger=migrator t=2025-06-17T07:45:47.827977976Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-17T07:45:47.828920806Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=938.8µs grafana | logger=migrator t=2025-06-17T07:45:47.833689784Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-17T07:45:47.84512448Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.433146ms grafana | logger=migrator t=2025-06-17T07:45:47.848794437Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-17T07:45:47.857163142Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=8.367225ms grafana | logger=migrator t=2025-06-17T07:45:47.862337375Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-17T07:45:47.873447867Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=11.109232ms grafana | logger=migrator t=2025-06-17T07:45:47.877046695Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-17T07:45:47.884075606Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=7.028481ms grafana | logger=migrator t=2025-06-17T07:45:47.889201788Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-17T07:45:47.899630924Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=10.427876ms grafana | logger=migrator t=2025-06-17T07:45:47.92778684Z level=info msg="Executing migration" 
id="add snapshot error_string column" grafana | logger=migrator t=2025-06-17T07:45:47.942519679Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=14.732329ms grafana | logger=migrator t=2025-06-17T07:45:47.945903433Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-17T07:45:47.946807522Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=903.679µs grafana | logger=migrator t=2025-06-17T07:45:47.9524512Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-17T07:45:47.988566836Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.114966ms grafana | logger=migrator t=2025-06-17T07:45:47.992445296Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-17T07:45:48.001904902Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.458466ms grafana | logger=migrator t=2025-06-17T07:45:48.005529828Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-17T07:45:48.012714292Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.184314ms grafana | logger=migrator t=2025-06-17T07:45:48.016878183Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-17T07:45:48.026451741Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=9.572558ms grafana | logger=migrator t=2025-06-17T07:45:48.032898276Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator 
t=2025-06-17T07:45:48.042749857Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.850791ms grafana | logger=migrator t=2025-06-17T07:45:48.046183961Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-17T07:45:48.046203861Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=20.57µs grafana | logger=migrator t=2025-06-17T07:45:48.050276702Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-17T07:45:48.050297893Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=21.961µs grafana | logger=migrator t=2025-06-17T07:45:48.053467274Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:48.063372315Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.899971ms grafana | logger=migrator t=2025-06-17T07:45:48.066871801Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.076469189Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.578678ms grafana | logger=migrator t=2025-06-17T07:45:48.080994194Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-17T07:45:48.081322947Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=328.433µs grafana | logger=migrator t=2025-06-17T07:45:48.08445539Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator 
t=2025-06-17T07:45:48.084698072Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=242.272µs grafana | logger=migrator t=2025-06-17T07:45:48.089025406Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:48.102077828Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=13.053082ms grafana | logger=migrator t=2025-06-17T07:45:48.105708095Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.115315393Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.606798ms grafana | logger=migrator t=2025-06-17T07:45:48.119785618Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-17T07:45:48.127322584Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.536196ms grafana | logger=migrator t=2025-06-17T07:45:48.13088744Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-17T07:45:48.140761041Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.871841ms grafana | logger=migrator t=2025-06-17T07:45:48.144069784Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-17T07:45:48.144514038Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=446.804µs grafana | logger=migrator t=2025-06-17T07:45:48.148435348Z level=info msg="Executing migration" id="add metadata 
column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:48.16043358Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=12.002122ms grafana | logger=migrator t=2025-06-17T07:45:48.192820318Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.20474783Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.928101ms grafana | logger=migrator t=2025-06-17T07:45:48.210101604Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-17T07:45:48.210383747Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=281.763µs grafana | logger=migrator t=2025-06-17T07:45:48.21364742Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-17T07:45:48.214390287Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=740.037µs grafana | logger=migrator t=2025-06-17T07:45:48.218728061Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-17T07:45:48.220485439Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.757098ms grafana | logger=migrator t=2025-06-17T07:45:48.226451749Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-17T07:45:48.22650808Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=56.951µs grafana | logger=migrator t=2025-06-17T07:45:48.232314339Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-17T07:45:48.23241469Z level=info msg="Migration successfully 
executed" id="increase session_id column length to 1024" duration=102.121µs grafana | logger=migrator t=2025-06-17T07:45:48.236247839Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-17T07:45:48.237012737Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=764.068µs grafana | logger=migrator t=2025-06-17T07:45:48.240588093Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.250207641Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=9.619678ms grafana | logger=migrator t=2025-06-17T07:45:48.255582316Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:48.264406875Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=8.818249ms grafana | logger=migrator t=2025-06-17T07:45:48.268012091Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-17T07:45:48.270175713Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=2.167412ms grafana | logger=migrator t=2025-06-17T07:45:48.276888631Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-17T07:45:48.278502478Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.613797ms grafana | logger=migrator t=2025-06-17T07:45:48.282423987Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-17T07:45:48.293582971Z level=info msg="Migration successfully executed" id="add guid column to alert_rule 
table" duration=11.158723ms grafana | logger=migrator t=2025-06-17T07:45:48.300684783Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.309189599Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=8.503346ms grafana | logger=migrator t=2025-06-17T07:45:48.313933806Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-17T07:45:48.313966117Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-17T07:45:48.31419973Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-17T07:45:48.31421763Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=283.684µs grafana | logger=migrator t=2025-06-17T07:45:48.317233721Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-17T07:45:48.317826436Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=592.395µs grafana | logger=migrator t=2025-06-17T07:45:48.323020359Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-17T07:45:48.324769786Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.747547ms grafana | logger=migrator t=2025-06-17T07:45:48.330290262Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-17T07:45:48.331532646Z level=info msg="Migration successfully executed" id="add index in 
alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.241804ms grafana | logger=migrator t=2025-06-17T07:45:48.334714448Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-17T07:45:48.335859559Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.144651ms grafana | logger=migrator t=2025-06-17T07:45:48.33887383Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-17T07:45:48.340021752Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.147722ms grafana | logger=migrator t=2025-06-17T07:45:48.342948192Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:48.35365463Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.705178ms grafana | logger=migrator t=2025-06-17T07:45:48.358054964Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-17T07:45:48.367985745Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.930161ms grafana | logger=migrator t=2025-06-17T07:45:48.371608532Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-17T07:45:48.379633284Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=8.023962ms grafana | logger=migrator t=2025-06-17T07:45:48.385621494Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator 
t=2025-06-17T07:45:48.39603177Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.409476ms grafana | logger=migrator t=2025-06-17T07:45:48.400392344Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-17T07:45:48.400568785Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-17T07:45:48.400585265Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=192.451µs grafana | logger=migrator t=2025-06-17T07:45:48.403344143Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-17T07:45:48.404174733Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=830.349µs grafana | logger=migrator t=2025-06-17T07:45:48.407244114Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.100747712s grafana | logger=migrator t=2025-06-17T07:45:48.408338324Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-17T07:45:48.428202846Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-17T07:45:48.428412858Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-17T07:45:48.47192811Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-17T07:45:48.572909774Z level=info msg="Restored cache from database" duration=590.896µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.583758524Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-17T07:45:48.583774024Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-17T07:45:48.591557903Z level=info msg="Executing migration" id="create 
resource_migration_log table" grafana | logger=resource-migrator t=2025-06-17T07:45:48.59229128Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=733.347µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.59518187Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-17T07:45:48.5951925Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=11.26µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.599494053Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-17T07:45:48.599549994Z level=info msg="Migration successfully executed" id="drop table resource" duration=55.471µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.601891838Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-17T07:45:48.603653156Z level=info msg="Migration successfully executed" id="create table resource" duration=1.764358ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.60704915Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-17T07:45:48.60899951Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.94956ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.61197887Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.612053481Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=74.401µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.616443895Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.617690468Z level=info msg="Migration successfully executed" id="create table 
resource_history" duration=1.246733ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.62086017Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-17T07:45:48.622177254Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.316724ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.625362526Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-17T07:45:48.626575378Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.213122ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.630850252Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-17T07:45:48.630928413Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=78.01µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.634310677Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-17T07:45:48.635136355Z level=info msg="Migration successfully executed" id="create table resource_version" duration=825.178µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.638756671Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-17T07:45:48.64060739Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.850429ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.646134107Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-17T07:45:48.646211277Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=77.63µs grafana | logger=resource-migrator 
t=2025-06-17T07:45:48.650967996Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-17T07:45:48.653208439Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.239483ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.656590743Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-17T07:45:48.658571243Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.97961ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.664149299Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-17T07:45:48.665369242Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.219203ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.668521153Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.680729077Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=12.206634ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.68396548Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-17T07:45:48.692856511Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.888941ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.696953262Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-17T07:45:48.698244095Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.290523ms grafana | 
logger=resource-migrator t=2025-06-17T07:45:48.702643029Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-17T07:45:48.70464795Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.972581ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.737052279Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.747422564Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.369495ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.752047161Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-17T07:45:48.762548228Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.499076ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.766851201Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-17T07:45:48.766878931Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-17T07:45:48.767337305Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=485.624µs grafana | logger=resource-migrator t=2025-06-17T07:45:48.770756961Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-17T07:45:48.772130214Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.376263ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.77655922Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.788728303Z level=info msg="Migration 
successfully executed" id="Add generation to resource history" duration=12.167013ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.792909305Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-17T07:45:48.79434821Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.438415ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.797830205Z level=info msg="migrations completed" performed=26 skipped=0 duration=206.326272ms grafana | logger=resource-migrator t=2025-06-17T07:45:48.798452081Z level=info msg="Unlocking database" grafana | t=2025-06-17T07:45:48.798697254Z level=info caller=logger.go:214 time=2025-06-17T07:45:48.798671204Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-17T07:45:48.810524964Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-17T07:45:48.849394338Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-17T07:45:48.849422599Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-17T07:45:48.849479119Z level=info msg="Plugins loaded" count=53 duration=38.955905ms grafana | logger=query_data t=2025-06-17T07:45:48.854621001Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-17T07:45:48.859938025Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-17T07:45:48.877660515Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-17T07:45:48.885000939Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist 
t=2025-06-17T07:45:48.885017879Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-17T07:45:48.887213722Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-17T07:45:48.890407864Z level=info msg="Warming state cache for startup" grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:48.890731927Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=grafanaStorageLogger t=2025-06-17T07:45:48.89197013Z level=info msg="Storage starting" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-17T07:45:48.89200474Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2025-06-17T07:45:48.894462965Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=plugins.update.checker t=2025-06-17T07:45:48.988984053Z level=info msg="Update check succeeded" duration=98.511099ms grafana | logger=grafana.update.checker t=2025-06-17T07:45:49.002204758Z level=info msg="Update check succeeded" duration=110.091217ms grafana | logger=sqlstore.transactions t=2025-06-17T07:45:49.030415904Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=ngalert.state.manager t=2025-06-17T07:45:49.074028856Z level=info msg="State cache has been initialized" states=0 duration=183.609282ms grafana | logger=ngalert.scheduler t=2025-06-17T07:45:49.074106787Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-17T07:45:49.074260898Z level=info msg=starting first_tick=2025-06-17T07:45:50Z grafana | logger=provisioning.datasources t=2025-06-17T07:45:49.0773583Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2025-06-17T07:45:49.107329554Z level=info msg="starting to provision 
alerting" grafana | logger=provisioning.alerting t=2025-06-17T07:45:49.107374684Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-17T07:45:49.109601237Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-17T07:45:49.174551235Z level=info msg="Patterns update finished" duration=104.51186ms grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.266047622Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.267594078Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.268181843Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.268793329Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.275755781Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.276329316Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.276787881Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.277772981Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-17T07:45:49.279061854Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=plugin.installer t=2025-06-17T07:45:49.291221147Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=app-registry 
t=2025-06-17T07:45:49.340553507Z level=info msg="app registry initialized" grafana | logger=installer.fs t=2025-06-17T07:45:49.360977274Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-17T07:45:49.398127161Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:49.398281263Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=507.495286ms grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:49.398403734Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-17T07:45:49.634992461Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-17T07:45:49.711362915Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-17T07:45:49.739302698Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:49.739337168Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=340.846963ms grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:49.739358428Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-17T07:45:49.837241121Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-17T07:45:50.01924474Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-17T07:45:50.14994252Z level=info msg="Downloaded and extracted 
grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-17T07:45:50.175432646Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:50.175459816Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=436.094768ms grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:50.175485237Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-17T07:45:50.385886655Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-17T07:45:50.437673228Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-17T07:45:50.453225251Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-17T07:45:50.453247781Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=277.756554ms grafana | logger=infra.usagestats t=2025-06-17T07:46:41.896110551Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-17 07:45:43,342] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,342] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,342] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yam
l-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:43,343] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,343] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,346] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,349] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-17 07:45:43,353] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-17 07:45:43,360] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:43,377] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:43,377] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:43,387] INFO Socket connection established, initiating session, client: /172.17.0.5:52252, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:43,448] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002d4e00000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:43,567] INFO Session: 0x1000002d4e00000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:43,568] INFO EventThread shut down for session: 0x1000002d4e00000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-17 07:45:44,373] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2025-06-17 07:45:44,656] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-17 07:45:44,752] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2025-06-17 07:45:44,753] INFO starting (kafka.server.KafkaServer)
kafka | [2025-06-17 07:45:44,753] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2025-06-17 07:45:44,767] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-17 07:45:44,771] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,771] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,771] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,771] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,771] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,771] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../sh
are/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar
:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java
/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-17 07:45:44,772] INFO Client environment:user.dir=/home/appuser 
(org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:44,772] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:44,774] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-17 07:45:44,777] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-17 07:45:44,784] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:44,785] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-17 07:45:44,789] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:44,797] INFO Socket connection established, initiating session, client: /172.17.0.5:52254, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:44,808] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002d4e00001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-17 07:45:44,813] INFO [ZooKeeperClient Kafka server] Connected.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-17 07:45:45,312] INFO Cluster ID = PJ3-IXoLThugmvvSQZmOKA (kafka.server.KafkaServer)
kafka | [2025-06-17 07:45:45,317] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-17 07:45:45,376] INFO KafkaConfig values:
kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | 	alter.config.policy.class.name = null
kafka | 	alter.log.dirs.replication.quota.window.num = 11
kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | 	authorizer.class.name = 
kafka | 	auto.create.topics.enable = true
kafka | 	auto.include.jmx.reporter = true
kafka | 	auto.leader.rebalance.enable = true
kafka | 	background.threads = 10
kafka | 	broker.heartbeat.interval.ms = 2000
kafka | 	broker.id = 1
kafka | 	broker.id.generation.enable = true
kafka | 	broker.rack = null
kafka | 	broker.session.timeout.ms = 9000
kafka | 	client.quota.callback.class = null
kafka | 	compression.type = producer
kafka | 	connection.failed.authentication.delay.ms = 100
kafka | 	connections.max.idle.ms = 600000
kafka | 	connections.max.reauth.ms = 0
kafka | 	control.plane.listener.name = null
kafka | 	controlled.shutdown.enable = true
kafka | 	controlled.shutdown.max.retries = 3
kafka | 	controlled.shutdown.retry.backoff.ms = 5000
kafka | 	controller.listener.names = null
kafka | 	controller.quorum.append.linger.ms = 25
kafka | 	controller.quorum.election.backoff.max.ms = 1000
kafka | 	controller.quorum.election.timeout.ms = 1000
kafka | 	controller.quorum.fetch.timeout.ms = 2000
kafka | 	controller.quorum.request.timeout.ms = 2000
kafka | 	controller.quorum.retry.backoff.ms = 20
kafka | 	controller.quorum.voters = []
kafka | 	controller.quota.window.num = 11
kafka | 	controller.quota.window.size.seconds = 1
kafka | 	controller.socket.timeout.ms = 30000
kafka | 	create.topic.policy.class.name = null
kafka | 	default.replication.factor = 1
kafka | 	delegation.token.expiry.check.interval.ms = 3600000
kafka | 	delegation.token.expiry.time.ms = 86400000
kafka | 	delegation.token.master.key = null
kafka | 	delegation.token.max.lifetime.ms = 604800000
kafka | 	delegation.token.secret.key = null
kafka | 	delete.records.purgatory.purge.interval.requests = 1
kafka | 	delete.topic.enable = true
kafka | 	early.start.listeners = null
kafka | 	fetch.max.bytes = 57671680
kafka | 	fetch.purgatory.purge.interval.requests = 1000
kafka | 	group.initial.rebalance.delay.ms = 3000
kafka | 	group.max.session.timeout.ms = 1800000
kafka | 	group.max.size = 2147483647
kafka | 	group.min.session.timeout.ms = 6000
kafka | 	initial.broker.registration.timeout.ms = 60000
kafka | 	inter.broker.listener.name = PLAINTEXT
kafka | 	inter.broker.protocol.version = 3.4-IV0
kafka | 	kafka.metrics.polling.interval.secs = 10
kafka | 	kafka.metrics.reporters = []
kafka | 	leader.imbalance.check.interval.seconds = 300
kafka | 	leader.imbalance.per.broker.percentage = 10
kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | 	log.cleaner.backoff.ms = 15000
kafka | 	log.cleaner.dedupe.buffer.size = 134217728
kafka | 	log.cleaner.delete.retention.ms = 86400000
kafka | 	log.cleaner.enable = true
kafka | 	log.cleaner.io.buffer.load.factor = 0.9
kafka | 	log.cleaner.io.buffer.size = 524288
kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | 	log.cleaner.min.cleanable.ratio = 0.5
kafka | 	log.cleaner.min.compaction.lag.ms = 0
kafka | 	log.cleaner.threads = 1
kafka | 	log.cleanup.policy = [delete]
kafka | 	log.dir = /tmp/kafka-logs
kafka | 	log.dirs = /var/lib/kafka/data
kafka | 	log.flush.interval.messages = 9223372036854775807
kafka | 	log.flush.interval.ms = null
kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | 	log.index.interval.bytes = 4096
kafka | 	log.index.size.max.bytes = 10485760
kafka | 	log.message.downconversion.enable = true
kafka | 	log.message.format.version = 3.0-IV1
kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.type = CreateTime
kafka | 	log.preallocate = false
kafka | 	log.retention.bytes = -1
kafka | 	log.retention.check.interval.ms = 300000
kafka | 	log.retention.hours = 168
kafka | 	log.retention.minutes = null
kafka | 	log.retention.ms = null
kafka | 	log.roll.hours = 168
kafka | 	log.roll.jitter.hours = 0
kafka | 	log.roll.jitter.ms = null
kafka | 	log.roll.ms = null
kafka | 	log.segment.bytes = 1073741824
kafka | 	log.segment.delete.delay.ms = 60000
kafka | 	max.connection.creation.rate = 2147483647
kafka | 	max.connections = 2147483647
kafka | 	max.connections.per.ip = 2147483647
kafka | 	max.connections.per.ip.overrides = 
kafka | 	max.incremental.fetch.session.cache.slots = 1000
kafka | 	message.max.bytes = 1048588
kafka | 	metadata.log.dir = null
kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
kafka | 	metadata.log.segment.bytes = 1073741824
kafka | 	metadata.log.segment.min.bytes = 8388608
kafka | 	metadata.log.segment.ms = 604800000
kafka | 	metadata.max.idle.interval.ms = 500
kafka | 	metadata.max.retention.bytes = 104857600
kafka | 	metadata.max.retention.ms = 604800000
kafka | 	metric.reporters = []
kafka | 	metrics.num.samples = 2
kafka | 	metrics.recording.level = INFO
kafka | 	metrics.sample.window.ms = 30000
kafka | 	min.insync.replicas = 1
kafka | 	node.id = 1
kafka | 	num.io.threads = 8
kafka | 	num.network.threads = 3
kafka | 	num.partitions = 1
kafka | 	num.recovery.threads.per.data.dir = 1
kafka | 	num.replica.alter.log.dirs.threads = null
kafka | 	num.replica.fetchers = 1
kafka | 	offset.metadata.max.bytes = 4096
kafka | 	offsets.commit.required.acks = -1
kafka | 	offsets.commit.timeout.ms = 5000
kafka | 	offsets.load.buffer.size = 5242880
kafka | 	offsets.retention.check.interval.ms = 600000
kafka | 	offsets.retention.minutes = 10080
kafka | 	offsets.topic.compression.codec = 0
kafka | 	offsets.topic.num.partitions = 50
kafka | 	offsets.topic.replication.factor = 1
kafka | 	offsets.topic.segment.bytes = 104857600
kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | 	password.encoder.iterations = 4096
kafka | 	password.encoder.key.length = 128
kafka | 	password.encoder.keyfactory.algorithm = null
kafka | 	password.encoder.old.secret = null
kafka | 	password.encoder.secret = null
kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | 	process.roles = []
kafka | 	producer.id.expiration.check.interval.ms = 600000
kafka | 	producer.id.expiration.ms = 86400000
kafka | 	producer.purgatory.purge.interval.requests = 1000
kafka | 	queued.max.request.bytes = -1
kafka | 	queued.max.requests = 500
kafka | 	quota.window.num = 11
kafka | 	quota.window.size.seconds = 1
kafka | 	remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | 	remote.log.manager.task.interval.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.ms = 500
kafka | 	remote.log.manager.task.retry.jitter = 0.2
kafka | 	remote.log.manager.thread.pool.size = 10
kafka | 	remote.log.metadata.manager.class.name = null
kafka | 	remote.log.metadata.manager.class.path = null
kafka | 	remote.log.metadata.manager.impl.prefix = null
kafka | 	remote.log.metadata.manager.listener.name = null
kafka | 	remote.log.reader.max.pending.tasks = 100
kafka | 	remote.log.reader.threads = 10
kafka | 	remote.log.storage.manager.class.name = null
kafka | 	remote.log.storage.manager.class.path = null
kafka | 	remote.log.storage.manager.impl.prefix = null
kafka | 	remote.log.storage.system.enable = false
kafka | 	replica.fetch.backoff.ms = 1000
kafka | 	replica.fetch.max.bytes = 1048576
kafka | 	replica.fetch.min.bytes = 1
kafka | 	replica.fetch.response.max.bytes = 10485760
kafka | 	replica.fetch.wait.max.ms = 500
kafka | 	replica.high.watermark.checkpoint.interval.ms = 5000
kafka | 	replica.lag.time.max.ms = 30000
kafka | 	replica.selector.class = null
kafka | 	replica.socket.receive.buffer.bytes = 65536
kafka | 	replica.socket.timeout.ms = 30000
kafka | 	replication.quota.window.num = 11
kafka | 	replication.quota.window.size.seconds = 1
kafka | 	request.timeout.ms = 30000
kafka | 	reserved.broker.max.id = 1000
kafka | 	sasl.client.callback.handler.class = null
kafka | 	sasl.enabled.mechanisms = [GSSAPI]
kafka | 	sasl.jaas.config = null
kafka | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | 	sasl.kerberos.min.time.before.relogin = 60000
kafka | 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | 	sasl.kerberos.service.name = null
kafka | 	sasl.kerberos.ticket.renew.jitter = 0.05
kafka | 	sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | 	sasl.login.callback.handler.class = null
kafka | 	sasl.login.class = null
kafka | 	sasl.login.connect.timeout.ms = null
kafka | 	sasl.login.read.timeout.ms = null
kafka | 	sasl.login.refresh.buffer.seconds = 300
kafka | 	sasl.login.refresh.min.period.seconds = 60
kafka | 	sasl.login.refresh.window.factor = 0.8
kafka | 	sasl.login.refresh.window.jitter = 0.05
kafka | 	sasl.login.retry.backoff.max.ms = 10000
kafka | 	sasl.login.retry.backoff.ms = 100
kafka | 	sasl.mechanism.controller.protocol = GSSAPI
kafka | 	sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | 	sasl.oauthbearer.clock.skew.seconds = 30
kafka | 	sasl.oauthbearer.expected.audience = null
kafka | 	sasl.oauthbearer.expected.issuer = null
kafka | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | 	sasl.oauthbearer.jwks.endpoint.url = null
kafka | 	sasl.oauthbearer.scope.claim.name = scope
kafka | 	sasl.oauthbearer.sub.claim.name = sub
kafka | 	sasl.oauthbearer.token.endpoint.url = null
kafka | 	sasl.server.callback.handler.class = null
kafka | 	sasl.server.max.receive.size = 524288
kafka | 	security.inter.broker.protocol = PLAINTEXT
kafka | 	security.providers = null
kafka | 	socket.connection.setup.timeout.max.ms = 30000
kafka | 	socket.connection.setup.timeout.ms = 10000
kafka | 	socket.listen.backlog.size = 50
kafka | 	socket.receive.buffer.bytes = 102400
kafka | 	socket.request.max.bytes = 104857600
kafka | 	socket.send.buffer.bytes = 102400
kafka | 	ssl.cipher.suites = []
kafka | 	ssl.client.auth = none
kafka | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | 	ssl.endpoint.identification.algorithm = https
kafka | 	ssl.engine.factory.class = null
kafka | 	ssl.key.password = null
kafka | 	ssl.keymanager.algorithm = SunX509
kafka | 	ssl.keystore.certificate.chain = null
kafka | 	ssl.keystore.key = null
kafka | 	ssl.keystore.location = null
kafka | 	ssl.keystore.password = null
kafka | 	ssl.keystore.type = JKS
kafka | 	ssl.principal.mapping.rules = DEFAULT
kafka | 	ssl.protocol = TLSv1.3
kafka | 	ssl.provider = null
kafka | 	ssl.secure.random.implementation = null
kafka | 	ssl.trustmanager.algorithm = PKIX
kafka | 	ssl.truststore.certificates = null
kafka | 	ssl.truststore.location = null
kafka | 	ssl.truststore.password = null
kafka | 	ssl.truststore.type = JKS
kafka | 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | 	transaction.max.timeout.ms = 900000
kafka | 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | 	transaction.state.log.load.buffer.size = 5242880
kafka | 	transaction.state.log.min.isr = 2
kafka | 	transaction.state.log.num.partitions = 50
kafka | 	transaction.state.log.replication.factor = 3
kafka | 	transaction.state.log.segment.bytes = 104857600
kafka | 	transactional.id.expiration.ms = 604800000
kafka | 	unclean.leader.election.enable = false
kafka | 	zookeeper.clientCnxnSocket = null
kafka | 	zookeeper.connect = zookeeper:2181
kafka | 	zookeeper.connection.timeout.ms = null
kafka | 	zookeeper.max.in.flight.requests = 10
kafka | 	zookeeper.metadata.migration.enable = false
kafka | 	zookeeper.session.timeout.ms = 18000
kafka | 	zookeeper.set.acl = false
kafka | 	zookeeper.ssl.cipher.suites = null
kafka | 	zookeeper.ssl.client.enable = false
kafka | 	zookeeper.ssl.crl.enable = false
kafka | 	zookeeper.ssl.enabled.protocols = null
kafka | 	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | 	zookeeper.ssl.keystore.location = null
kafka | 	zookeeper.ssl.keystore.password = null
kafka | 	zookeeper.ssl.keystore.type = null
kafka | 	zookeeper.ssl.ocsp.enable = false
kafka | 	zookeeper.ssl.protocol = TLSv1.2
kafka | 	zookeeper.ssl.truststore.location = null
kafka | 	zookeeper.ssl.truststore.password = null
kafka | 	zookeeper.ssl.truststore.type = null
kafka |  (kafka.server.KafkaConfig)
kafka | [2025-06-17 07:45:45,418] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-17 07:45:45,418] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-17 07:45:45,418] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-17 07:45:45,420] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-17 07:45:45,461] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-17 07:45:45,464] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-17 07:45:45,479] INFO Loaded 0 logs in 18ms. (kafka.log.LogManager)
kafka | [2025-06-17 07:45:45,479] INFO Starting log cleanup with a period of 300000 ms.
(kafka.log.LogManager)
kafka | [2025-06-17 07:45:45,481] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2025-06-17 07:45:45,501] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-17 07:45:45,551] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-17 07:45:45,566] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-17 07:45:45,581] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-17 07:45:45,651] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-17 07:45:46,015] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-17 07:45:46,018] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-17 07:45:46,047] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-17 07:45:46,048] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-17 07:45:46,048] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-17 07:45:46,056] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-17 07:45:46,062] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-17 07:45:46,082] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,083] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,087] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,089] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,107] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-17 07:45:46,132] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-17 07:45:46,164] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750146346151,1750146346151,1,0,0,72057606199312385,258,0,27
kafka |  (kafka.zk.KafkaZkClient)
kafka | [2025-06-17 07:45:46,166] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-17 07:45:46,238] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-17 07:45:46,257] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,262] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,273] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,278] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-17 07:45:46,290] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,292] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:45:46,295] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,303] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-17 07:45:46,313] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:45:46,337] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-17 07:45:46,347] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-17 07:45:46,347] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-17 07:45:46,350] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,354] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-17 07:45:46,357] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,361] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,363] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,390] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-17 07:45:46,392] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,402] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,408] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-17 07:45:46,421] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-17 07:45:46,423] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,423] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,423] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,425] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,430] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-17 07:45:46,434] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,435] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,435] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,436] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-17 07:45:46,439] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-17 07:45:46,443] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-17 07:45:46,452] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing.
(kafka.network.SocketServer) kafka | [2025-06-17 07:45:46,455] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-17 07:45:46,456] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-17 07:45:46,464] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-17 07:45:46,464] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-17 07:45:46,465] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-17 07:45:46,465] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-17 07:45:46,467] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-17 07:45:46,468] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-17 07:45:46,469] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,481] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-17 07:45:46,481] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-17 07:45:46,481] INFO Kafka startTimeMs: 1750146346470 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-17 07:45:46,483] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-17 
07:45:46,485] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,486] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,486] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,487] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,493] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,509] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:46,550] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-17 07:45:46,569] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-17 07:45:46,570] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-17 07:45:51,513] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-17 07:45:51,513] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,410] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first 
block (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,414] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-17 07:46:23,416] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-17 07:46:23,425] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,456] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment 
[Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(t1VS_Q5pRTinFA1Xdrz0oA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,457] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,459] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,459] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,464] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-17 07:46:23,464] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,496] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-17 07:46:23,499] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-17 07:46:23,500] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-17 07:46:23,502] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to 
brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,502] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,503] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,508] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,508] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,520] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(3lIdmHXGRNC7w_Q6Cc-AOw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,521] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-17 07:46:23,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,526] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-17 07:46:23,528] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,532] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-17 07:46:23,533] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-17 07:46:23,536] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-17 07:46:23,536] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,536] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,537] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,537] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,537] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,538] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-17 07:46:23,544] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2025-06-17 07:46:23,545] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,545] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,545] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,546] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,546] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,546] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,546] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-17 07:46:23,550] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-17 07:46:23,643] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:23,670] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:23,675] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:23,676] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:23,681] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(t1VS_Q5pRTinFA1Xdrz0oA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:23,699] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-17 07:46:23,706] INFO [Broker id=1] Finished LeaderAndIsr request in 199ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-17 07:46:23,712] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=t1VS_Q5pRTinFA1Xdrz0oA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-17 07:46:23,718] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-17 07:46:23,719] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-17 07:46:23,720] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-17 07:46:23,775] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-17 07:46:23,775] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-17 07:46:23,775] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-17 07:46:23,775] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-17 07:46:23,775] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-17 07:46:23,776] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-17 07:46:23,777] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 
(state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-17 07:46:23,778] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-17 07:46:23,778] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-17 07:46:23,779] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to 
OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,783] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-17 07:46:23,784] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,791] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | 
[2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,791] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,792] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 
kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,793] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-17 07:46:23,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-17 07:46:23,821] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-17 07:46:23,823] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, 
__consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-17 07:46:23,824] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) kafka | [2025-06-17 07:46:23,847] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,855] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,855] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,858] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,859] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and 
removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-17 07:46:23,872] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,873] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,873] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,873] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,873] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:23,891] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,891] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,892] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,892] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,892] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:23,917] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,918] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,918] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,918] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,918] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:23,935] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,935] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,935] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,935] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,935] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:23,956] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,957] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,958] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,958] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,958] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:23,969] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:23,970] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:23,970] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,970] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:23,970] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,008] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,009] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,009] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,009] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,009] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,017] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,026] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,026] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,026] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,026] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,033] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,034] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,034] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,034] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,034] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,042] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,043] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,043] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,043] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,043] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,053] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,054] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,055] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,055] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,055] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-17 07:46:24,064] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,065] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,065] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,066] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,067] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,074] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,074] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,075] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,075] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,075] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,086] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,087] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,089] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,089] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,089] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,096] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,097] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,097] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,097] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,097] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,107] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,110] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,110] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,110] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,110] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,119] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,120] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,120] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,120] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,120] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,127] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,128] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,128] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,128] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,128] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,136] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,137] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,137] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,137] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,137] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,149] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,150] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,150] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,150] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,150] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,159] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,160] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,160] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,160] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,160] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,178] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,179] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,179] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,179] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,179] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,195] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,196] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,197] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,197] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,197] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,207] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,208] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,208] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,208] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,208] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,221] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,221] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,221] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,221] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,222] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,236] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,237] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,237] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,237] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,237] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,271] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,272] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,272] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,272] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,272] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,280] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,280] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,280] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,280] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,281] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,288] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,288] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,288] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,288] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,289] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,305] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,306] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,306] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,307] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,307] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,319] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,320] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,320] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,320] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,321] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,330] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,331] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,331] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,331] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,331] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,339] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,340] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,340] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,340] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,340] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,348] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,349] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,349] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,349] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,349] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,358] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,359] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,359] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,360] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,360] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,366] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,370] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,370] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,370] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,371] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,377] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,378] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,378] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,378] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,378] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,387] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,388] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,389] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,389] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,389] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,399] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,402] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,402] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,403] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,403] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,412] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,413] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,413] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,413] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,413] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,423] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,425] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,425] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,425] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,425] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,436] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,437] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,438] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,438] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,438] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,444] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,445] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,445] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,445] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,445] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-17 07:46:24,450] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-17 07:46:24,451] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-17 07:46:24,451] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,451] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-17 07:46:24,451] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-17 07:46:24,464] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,467] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,467] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,467] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,468] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,476] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,478] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,478] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,478] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,478] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,486] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,486] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,486] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,486] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,486] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,496] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,496] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,496] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,496] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,496] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,531] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-17 07:46:24,532] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-17 07:46:24,532] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,532] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-17 07:46:24,532] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(3lIdmHXGRNC7w_Q6Cc-AOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-17 07:46:24,537] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-17 07:46:24,538] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-17 07:46:24,539] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-17 07:46:24,540] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-17 07:46:24,541] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-17 07:46:24,542] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-17 07:46:24,542] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-17 07:46:24,543] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,545] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,546] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,546] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,546] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,546] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 
for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,547] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,547] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,553] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,558] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,558] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,558] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,558] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,558] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,558] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,558] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | 
[2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,559] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,559] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,561] INFO [Broker id=1] Finished LeaderAndIsr request in 770ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2025-06-17 07:46:24,561] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,562] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,562] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,562] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,562] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,562] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,563] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,563] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,563] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=3lIdmHXGRNC7w_Q6Cc-AOw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-17 07:46:24,563] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,563] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,563] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,564] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,564] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,564] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,565] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,566] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-17 07:46:24,567] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-17 07:46:24,573] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,573] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,574] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,575] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-17 07:46:24,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,576] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,577] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,578] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,579] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,579] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:24,579] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-17 07:46:25,023] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:25,037] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:25,070] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 027c289e-8fec-4039-96c7-99c4e91494f7 in Empty state. Created a new member id consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:25,078] INFO [GroupCoordinator 1]: Preparing to rebalance group 027c289e-8fec-4039-96c7-99c4e91494f7 in state PreparingRebalance with old generation 0 (__consumer_offsets-17) (reason: Adding new member consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913 with group instance id None; client reason: need to re-join with the given member-id: consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:26,582] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4043d709-3d23-415f-b317-2540fafae824 in Empty state. Created a new member id consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:26,585] INFO [GroupCoordinator 1]: Preparing to rebalance group 4043d709-3d23-415f-b317-2540fafae824 in state PreparingRebalance with old generation 0 (__consumer_offsets-30) (reason: Adding new member consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb with group instance id None; client reason: need to re-join with the given member-id: consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:28,051] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:28,076] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:28,079] INFO [GroupCoordinator 1]: Stabilized group 027c289e-8fec-4039-96c7-99c4e91494f7 generation 1 (__consumer_offsets-17) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:28,086] INFO [GroupCoordinator 1]: Assignment received from leader consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913 for group 027c289e-8fec-4039-96c7-99c4e91494f7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:29,586] INFO [GroupCoordinator 1]: Stabilized group 4043d709-3d23-415f-b317-2540fafae824 generation 1 (__consumer_offsets-30) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-17 07:46:29,612] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb for group 4043d709-3d23-415f-b317-2540fafae824 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.7:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-17T07:46:01.248+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-17T07:46:01.310+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-17T07:46:01.311+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-17T07:46:02.724+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-17T07:46:02.889+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 154 ms. Found 6 JPA repository interfaces.
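The GroupCoordinator entries above walk each consumer group through the standard join → rebalance → stabilize → assignment cycle. As an illustration only (not part of the CSIT suite; `last_state_per_group` is a hypothetical helper), the per-group progress can be recovered from such lines with a small parser:

```python
import re

# Matches the three GroupCoordinator events visible in the log above.
EVENT_RE = re.compile(
    r"(Preparing to rebalance group|Stabilized group|"
    r"Assignment received from leader \S+ for group)\s+(\S+)"
)

def last_state_per_group(lines):
    """Return {group_id: last coordinator event seen} from log lines."""
    states = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            event, group = m.groups()
            # Keep only the first word as a compact state label.
            states[group] = event.split()[0]
    return states

# Abbreviated lines modeled on the log output above.
sample = [
    "kafka | INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance",
    "kafka | INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members",
    "kafka | INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-f2420281 for group policy-pap for generation 1.",
]
```

A group that ends in the "Assignment" state, as policy-pap does here, has completed its rebalance.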
policy-api | [2025-06-17T07:46:03.542+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-17T07:46:03.557+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-17T07:46:03.560+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-17T07:46:03.560+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-17T07:46:03.601+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-17T07:46:03.602+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2228 ms
policy-api | [2025-06-17T07:46:03.924+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-17T07:46:04.011+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-17T07:46:04.059+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-17T07:46:04.469+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-17T07:46:04.505+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-17T07:46:04.762+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd
policy-api | [2025-06-17T07:46:04.765+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
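The policy-api entries embed their fields in a pipe-delimited header, `[<timestamp>|<LEVEL>|<logger>|<thread>] <message>`. A minimal sketch (illustrative only; `parse_api_line` is a hypothetical helper, not part of the policy framework) of splitting that format:

```python
import re

# policy-api log header: "[<timestamp>|<LEVEL>|<logger>|<thread>] <message>"
LINE_RE = re.compile(r"\[([^|]+)\|([^|]+)\|([^|]+)\|([^\]|]+)\]\s*(.*)")

def parse_api_line(line):
    """Split one policy-api log line into its header fields and message,
    or return None when the line does not carry the pipe-delimited header."""
    m = LINE_RE.search(line)
    if not m:
        return None
    ts, level, logger, thread, message = m.groups()
    return {"timestamp": ts, "level": level, "logger": logger,
            "thread": thread, "message": message}
```

For example, it turns the Tomcat startup line into `{"level": "INFO", "logger": "TomcatWebServer", ...}`, which makes filtering a multiplexed CSIT log by level or logger straightforward.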
policy-api | [2025-06-17T07:46:04.867+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api |     Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api |     Database driver: undefined/unknown
policy-api |     Database version: 16.4
policy-api |     Autocommit mode: undefined/unknown
policy-api |     Isolation level: undefined/unknown
policy-api |     Minimum pool size: undefined/unknown
policy-api |     Maximum pool size: undefined/unknown
policy-api | [2025-06-17T07:46:07.007+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-17T07:46:07.010+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-17T07:46:07.677+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-17T07:46:08.567+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-17T07:46:09.678+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-17T07:46:09.726+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-17T07:46:10.403+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-17T07:46:10.569+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-17T07:46:10.599+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-17T07:46:10.624+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.071 seconds (process running for 10.653)
policy-api | [2025-06-17T07:46:39.924+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-17T07:46:39.925+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-17T07:46:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
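ROBOT_VARIABLES above is just a sequence of repeated `-v NAME:value` flags handed to the `robot` command line. As a sketch (the `robot_variable_args` helper is hypothetical, not part of the CSIT scripts), the same argument list can be assembled in Python and passed to Robot Framework's `robot.run_cli` entry point:

```python
def robot_variable_args(variables):
    """Expand a {NAME: value} mapping into the repeated '-v NAME:value'
    flags accepted by the robot CLI (robot -v NAME:value suite.robot)."""
    args = []
    for name, value in variables.items():
        args += ["-v", f"{name}:{value}"]
    return args

# A few of the CSIT variables from the log above.
csit_vars = {
    "POLICY_API_IP": "policy-api:6969",
    "POLICY_PAP_IP": "policy-pap:6969",
    "POLICY_DROOLS_IP": "policy-drools-pdp:9696",
}
argv = robot_variable_args(csit_vars) + ["drools-pdp-test.robot"]
```

With Robot Framework installed, `robot.run_cli(argv)` would then execute the suite with those variables bound.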
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | 
rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | 
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.360136
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.435311
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.491296
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.551212
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.610168
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.673895
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.727593
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.779085
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.831455
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.88347
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:47.947829
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.00065
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.055271
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.109936
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.158922
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.246986
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.301194
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.348323
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.406373
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.495273
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.551753
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.61457
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.669671
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.721164
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.783875
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.839358
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.894703
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:48.945401
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.021977
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.075947
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.12254
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.174289
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.22906
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.313264
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.366772
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.427234
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.491496
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.550731
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.609673
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.664826
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.718973
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.777687
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.840089
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.89486
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:49.953197
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.005704
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.07279
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.128098
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.185889
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.239959
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.290079
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.35576
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.408928
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.473467
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.527294
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.607368
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.656423
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.712597
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.766408
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.823203
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.892635
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.943463
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:50.997177
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.054701
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.12882
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.183792
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.235953
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.290318
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.342747
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.410096
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.46686
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.524036
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.579361
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.638984
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.691518
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.743914
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.799902
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.8532
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.931343
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:51.98395
policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.03637
policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.087168
policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.13677
policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.203186
policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.257398
policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.311483
policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.367859
policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.437774
policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.49224
policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.547819
policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.603763
policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.659294
policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.724289
policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.775973
policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.825492
policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1706250745470800u | 1 | 2025-06-17 07:45:52.878172
policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:52.953579
policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.008103
policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.062841
policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.114968
policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.176153
policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.233939
policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.289324
policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.342627
policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.398119
policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.462999
policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.515314
policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.57519
policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1706250745470900u | 1 | 2025-06-17 07:45:53.625973
policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.684569
policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.750147
policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.803824
policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.862315
policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.915332
policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:53.983545
policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:54.03897
policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:54.095751
policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1706250745471000u | 1 | 2025-06-17 07:45:54.147716
policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1706250745471100u | 1 | 2025-06-17 07:45:54.195382
policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1706250745471200u | 1 | 2025-06-17 07:45:54.268904
policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1706250745471200u | 1 | 2025-06-17 07:45:54.317282
policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1706250745471200u | 1 | 2025-06-17 07:45:54.378335
policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1706250745471200u | 1 | 2025-06-17 07:45:54.440334
policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1706250745471300u | 1 | 2025-06-17 07:45:54.502329
policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1706250745471300u | 1 | 2025-06-17 07:45:54.554162
policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1706250745471300u | 1 | 2025-06-17 07:45:54.607164
policy-db-migrator | (126 rows)
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | policyadmin: OK @ 1300
policy-db-migrator | Initializing clampacm...
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | 
INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | 
en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.273649 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.374491 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.432531 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.486204 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.534325 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.617064 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 
2025-06-17 07:45:55.661084 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.709499 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.767934 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.818558 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.875198 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.927985 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1706250745551400u | 1 | 2025-06-17 07:45:55.981257 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.033349 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.083984 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.143481 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.191473 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.244081 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.296662 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.345582 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1706250745551500u | 1 | 2025-06-17 07:45:56.396453 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 
1600 | 1706250745551600u | 1 | 2025-06-17 07:45:56.446487 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1706250745551600u | 1 | 2025-06-17 07:45:56.495605 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1706250745551601u | 1 | 2025-06-17 07:45:56.539254 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1706250745551601u | 1 | 2025-06-17 07:45:56.586114 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1706250745551700u | 1 | 2025-06-17 07:45:56.666972 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1706250745551700u | 1 | 2025-06-17 07:45:56.729247 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1706250745551700u | 1 | 2025-06-17 07:45:56.782148 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:56.843484 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:56.921354 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:56.9798 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.035063 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.090467 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.147922 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.23337 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.283716 policy-db-migrator | 37 | 
0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1706250745551701u | 1 | 2025-06-17 07:45:57.337525 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | 
postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | 
en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1706250745571600u | 1 | 2025-06-17 07:45:58.034829 policy-db-migrator 
| (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | 
| | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc 
| en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1706250745581600u | 1 | 2025-06-17 07:45:58.688438 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1706250745581600u | 1 | 2025-06-17 07:45:58.753374 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-drools-pdp | Waiting for pap port 6969... policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-drools-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! policy-drools-pdp | Waiting for kafka port 9092... 
policy-drools-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | -- configure -- policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | -- reload -- policy-drools-pdp | + set -x policy-drools-pdp | + systemConfs policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | -- systemConfs -- policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | -- maven -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | -- features -- policy-drools-pdp | + set -x policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | -- security -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + 
serverConfig properties policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | -- scripts -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' 
-d /opt/app/policy/bin ] policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + policy exec policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller policy-drools-pdp | + OPERATION=none policy-drools-pdp | -- /opt/app/policy/bin/policy exec -- policy-drools-pdp | + '[' -z exec ] policy-drools-pdp | + OPERATION=exec policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + policy_exec policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- policy_exec -- policy-drools-pdp | + echo '-- policy_exec --' policy-drools-pdp | + 
set -x policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + check_x_file bin/policy-management-controller policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- check_x_file -- policy-drools-pdp | + echo '-- check_x_file --' policy-drools-pdp | + set -x policy-drools-pdp | + FILE=bin/policy-management-controller policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' -x bin/policy-management-controller ]] policy-drools-pdp | + return 0 policy-drools-pdp | + bin/policy-management-controller exec policy-drools-pdp | + _DIR=/opt/app/policy policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ] policy-drools-pdp | -- bin/policy-management-controller exec -- policy-drools-pdp | + CONTROLLER=policy-management-controller policy-drools-pdp | + RETVAL=0 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID policy-drools-pdp | + exec_start policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- exec_start -- policy-drools-pdp | + echo '-- exec_start --' policy-drools-pdp | + set -x policy-drools-pdp | + status policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- status --' policy-drools-pdp | + set -x policy-drools-pdp | -- status -- policy-drools-pdp | + '[' -f /opt/app/policy/PID ] policy-drools-pdp | + '[' true ] policy-drools-pdp | + pidof -s java policy-drools-pdp | + _PID= policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running' policy-drools-pdp | + _RUNNING=0 policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + RETVAL=1 policy-drools-pdp | Policy Management (no pidfile) is NOT running policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running' policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + preRunning policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- preRunning --' policy-drools-pdp | + set -x policy-drools-pdp | -- preRunning -- policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd 
policy-drools-pdp | + xargs -I X printf ':%s' X policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar /opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.13.0.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.15.11.jar /opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.48.3.jar /opt/app/policy/lib/classgraph-4.8.179.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.10.1.jar /opt/app/policy/lib/commons-cli-1.9.0.jar /opt/app/policy/lib/commons-codec-1.18.0.jar /opt/app/policy/lib/commons-collections-3.2.2.jar /opt/app/policy/lib/commons-collections4-4.5.0-M3.jar /opt/app/policy/lib/commons-configuration2-2.11.0.jar /opt/app/policy/lib/commons-digester-2.1.jar /opt/app/policy/lib/commons-io-2.18.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.17.0.jar /opt/app/policy/lib/commons-logging-1.3.5.jar /opt/app/policy/lib/commons-net-3.11.1.jar /opt/app/policy/lib/commons-text-1.13.0.jar /opt/app/policy/lib/commons-validator-1.8.0.jar /opt/app/policy/lib/core-0.12.4.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar 
/opt/app/policy/lib/drools-engine-8.40.1.Final.jar /opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.36.0.jar /opt/app/policy/lib/failureaccess-1.0.3.jar /opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-2.12.1.jar /opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.4.6-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/handy-uri-templates-2.1.8.jar /opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar /opt/app/policy/lib/hibernate-core-6.6.16.Final.jar /opt/app/policy/lib/hk2-api-3.0.6.jar /opt/app/policy/lib/hk2-locator-3.0.6.jar /opt/app/policy/lib/hk2-utils-3.0.6.jar /opt/app/policy/lib/httpclient-4.5.13.jar /opt/app/policy/lib/httpcore-4.4.15.jar /opt/app/policy/lib/icu4j-74.2.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-3.0.0.jar /opt/app/policy/lib/jackson-annotations-2.18.3.jar /opt/app/policy/lib/jackson-core-2.18.3.jar 
/opt/app/policy/lib/jackson-databind-2.18.3.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar /opt/app/policy/lib/jakarta.activation-api-2.1.3.jar /opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-2.6.1.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.1.1.jar /opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-3.2.0.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.30.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar /opt/app/policy/lib/jcodings-1.0.58.jar /opt/app/policy/lib/jersey-client-3.1.10.jar /opt/app/policy/lib/jersey-common-3.1.10.jar /opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar /opt/app/policy/lib/jersey-hk2-3.1.10.jar /opt/app/policy/lib/jersey-server-3.1.10.jar /opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar /opt/app/policy/lib/jetty-http-12.0.21.jar /opt/app/policy/lib/jetty-io-12.0.21.jar /opt/app/policy/lib/jetty-security-12.0.21.jar /opt/app/policy/lib/jetty-server-12.0.21.jar 
/opt/app/policy/lib/jetty-session-12.0.21.jar /opt/app/policy/lib/jetty-util-12.0.21.jar /opt/app/policy/lib/joda-time-2.10.2.jar /opt/app/policy/lib/joni-2.2.1.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsoup-1.17.2.jar /opt/app/policy/lib/jspecify-1.0.0.jar /opt/app/policy/lib/kafka-clients-3.9.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.5.18.jar /opt/app/policy/lib/logback-core-1.5.18.jar /opt/app/policy/lib/lombok-1.18.38.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar 
/opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.43.0.jar /opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar /opt/app/policy/lib/opentelemetry-context-1.43.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.6.0.jar /opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.5.jar /opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar 
/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.8.jar /opt/app/policy/lib/slf4j-api-2.0.17.jar /opt/app/policy/lib/snakeyaml-2.4.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.29.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/a
pp/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310
-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/jon
i-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/
lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:
/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + /opt/app/policy/bin/configure-maven policy-drools-pdp | + export 'M2_HOME=/home/policy/.m2' 
policy-drools-pdp | + mkdir -p /home/policy/.m2 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' policy-drools-pdp | + cd /opt/app/policy policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp /opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-va
lidator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/op
t/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/
policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-
basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy
-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/l
ib/zstd-jni-1.5.6-4.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main policy-drools-pdp | [2025-06-17T07:46:24.478+00:00|INFO|LifecycleFsm|main] The mandatory Policy Types are []. Compliance is true policy-drools-pdp | [2025-06-17T07:46:24.483+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [org.onap.policy.drools.lifecycle.LifecycleFeature@2235eaab] policy-drools-pdp | [2025-06-17T07:46:24.498+00:00|INFO|PolicyContainer|main] PolicyContainer.main: configDir=config policy-drools-pdp | [2025-06-17T07:46:24.500+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-17T07:46:24.510+00:00|INFO|IndexedKafkaTopicSourceFactory|main] IndexedKafkaTopicSourceFactory []: no topic for KAFKA Source policy-drools-pdp | [2025-06-17T07:46:24.512+00:00|INFO|IndexedKafkaTopicSinkFactory|main] IndexedKafkaTopicSinkFactory []: no topic for KAFKA Sink policy-drools-pdp | [2025-06-17T07:46:24.838+00:00|INFO|PolicyEngineManager|main] lock manager is org.onap.policy.drools.system.internal.SimpleLockManager@376a312c policy-drools-pdp | [2025-06-17T07:46:24.848+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], 
context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START policy-drools-pdp | [2025-06-17T07:46:24.860+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-drools-pdp | [2025-06-17T07:46:24.862+00:00|INFO|JettyServletServer|CONFIG-9696] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-drools-pdp | [2025-06-17T07:46:24.871+00:00|INFO|Server|CONFIG-9696] jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 policy-drools-pdp | [2025-06-17T07:46:24.902+00:00|INFO|DefaultSessionIdManager|CONFIG-9696] Session workerName=node0 policy-drools-pdp | 
[2025-06-17T07:46:24.911+00:00|INFO|ContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored. 
policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored. policy-drools-pdp | Jun 17, 2025 7:46:25 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored. 
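Earlier in this log, the traced launch script converts `config/system.properties` into JVM `-D` flags with a `sed` one-liner before `exec`ing Java (`systemProperties='-Dlogback.configurationFile=config/logback.xml'`). A minimal standalone reproduction of that conversion — the temp-file path here is illustrative, not taken from the log:

```shell
# Reproduce the property-to-flag conversion used by the traced launch script.
# The sed expression captures "key = value" pairs (skipping blank/comment keys)
# and rewrites each one as "-Dkey=value".
printf 'logback.configurationFile=config/logback.xml\n' > /tmp/system.properties
systemProperties=$(sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /tmp/system.properties)
echo "$systemProperties"   # -Dlogback.configurationFile=config/logback.xml
```

Because `sed -n ... p` only prints matching lines, comment and blank lines in `system.properties` contribute nothing to the flag string.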
policy-drools-pdp | [2025-06-17T07:46:25.720+00:00|INFO|GsonMessageBodyHandler|CONFIG-9696] Using GSON for REST calls policy-drools-pdp | [2025-06-17T07:46:25.721+00:00|INFO|JacksonHandler|CONFIG-9696] Using GSON with Jackson behaviors for REST calls policy-drools-pdp | [2025-06-17T07:46:25.723+00:00|INFO|YamlMessageBodyHandler|CONFIG-9696] Accepting YAML for REST calls policy-drools-pdp | [2025-06-17T07:46:25.887+00:00|INFO|ServletContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} policy-drools-pdp | [2025-06-17T07:46:25.895+00:00|INFO|AbstractConnector|CONFIG-9696] Started CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696} policy-drools-pdp | [2025-06-17T07:46:25.896+00:00|INFO|Server|CONFIG-9696] Started oejs.Server@3276732{STARTING}[12.0.21,sto=0] @2609ms policy-drools-pdp | [2025-06-17T07:46:25.897+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], 
servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 8965 ms. policy-drools-pdp | [2025-06-17T07:46:25.908+00:00|INFO|LifecycleFsm|main] lifecycle event: start engine policy-drools-pdp | [2025-06-17T07:46:26.061+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-4043d709-3d23-415f-b317-2540fafae824-1 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = 4043d709-3d23-415f-b317-2540fafae824 policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted 
policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 
policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = 
null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | [2025-06-17T07:46:26.100+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-17T07:46:26.171+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-17T07:46:26.171+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-17T07:46:26.171+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146386169 policy-drools-pdp | [2025-06-17T07:46:26.173+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-1, groupId=4043d709-3d23-415f-b317-2540fafae824] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | [2025-06-17T07:46:26.173+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4043d709-3d23-415f-b317-2540fafae824, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e6308a9 policy-drools-pdp | [2025-06-17T07:46:26.187+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4043d709-3d23-415f-b317-2540fafae824, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, 
locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-17T07:46:26.188+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-4043d709-3d23-415f-b317-2540fafae824-2 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = 4043d709-3d23-415f-b317-2540fafae824 policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 
policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | 
sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | 
[2025-06-17T07:46:26.188+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-17T07:46:26.198+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-17T07:46:26.198+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-17T07:46:26.198+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146386197 policy-drools-pdp | [2025-06-17T07:46:26.199+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | [2025-06-17T07:46:26.199+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4043d709-3d23-415f-b317-2540fafae824, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-17T07:46:26.202+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=54049ed0-656a-482c-82eb-0b52d11cf96f, alive=false, publisher=null]]: starting policy-drools-pdp | [2025-06-17T07:46:26.215+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-drools-pdp | acks = -1 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | batch.size = 16384 policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | buffer.memory = 33554432 policy-drools-pdp | 
client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = producer-1 policy-drools-pdp | compression.gzip.level = -1 policy-drools-pdp | compression.lz4.level = 9 policy-drools-pdp | compression.type = none policy-drools-pdp | compression.zstd.level = 3 policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | delivery.timeout.ms = 120000 policy-drools-pdp | enable.idempotence = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | linger.ms = 0 policy-drools-pdp | max.block.ms = 60000 policy-drools-pdp | max.in.flight.requests.per.connection = 5 policy-drools-pdp | max.request.size = 1048576 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.max.idle.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partitioner.adaptive.partitioning.enable = true policy-drools-pdp | partitioner.availability.timeout.ms = 0 policy-drools-pdp | partitioner.class = null policy-drools-pdp | partitioner.ignore.keys = false policy-drools-pdp | receive.buffer.bytes = 32768 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retries = 2147483647 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | 
sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null 
policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | transaction.timeout.ms = 60000 policy-drools-pdp | transactional.id = null policy-drools-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | policy-drools-pdp | [2025-06-17T07:46:26.216+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-17T07:46:26.225+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-drools-pdp | [2025-06-17T07:46:26.242+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-17T07:46:26.242+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-17T07:46:26.242+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146386242 policy-drools-pdp | [2025-06-17T07:46:26.243+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=54049ed0-656a-482c-82eb-0b52d11cf96f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-drools-pdp | [2025-06-17T07:46:26.245+00:00|INFO|LifecycleStateDefault|main] LifecycleStateTerminated(): state-change from TERMINATED to PASSIVE policy-drools-pdp | [2025-06-17T07:46:26.245+00:00|INFO|LifecycleFsm|pool-2-thread-1] lifecycle event: status policy-drools-pdp | [2025-06-17T07:46:26.246+00:00|INFO|MdcTransactionImpl|main] policy-drools-pdp | [2025-06-17T07:46:26.249+00:00|INFO|Main|main] Started policy-drools-pdp service successfully. 
policy-drools-pdp | [2025-06-17T07:46:26.264+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-drools-pdp | [] policy-drools-pdp | [2025-06-17T07:46:26.562+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Cluster ID: PJ3-IXoLThugmvvSQZmOKA policy-drools-pdp | [2025-06-17T07:46:26.563+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-drools-pdp | [2025-06-17T07:46:26.567+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: PJ3-IXoLThugmvvSQZmOKA policy-drools-pdp | [2025-06-17T07:46:26.568+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] (Re-)joining group policy-drools-pdp | [2025-06-17T07:46:26.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Request joining group due to: need to re-join with the given member-id: consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb policy-drools-pdp | [2025-06-17T07:46:26.584+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] (Re-)joining group policy-drools-pdp | [2025-06-17T07:46:26.597+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-drools-pdp | [2025-06-17T07:46:29.589+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb', protocol='range'} policy-drools-pdp | [2025-06-17T07:46:29.600+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Finished assignment for group at generation 1: {consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb=Assignment(partitions=[policy-pdp-pap-0])} policy-drools-pdp | [2025-06-17T07:46:29.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4043d709-3d23-415f-b317-2540fafae824-2-56ab74f9-b52b-484f-af88-3e8e893180cb', protocol='range'} policy-drools-pdp | [2025-06-17T07:46:29.618+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-drools-pdp | [2025-06-17T07:46:29.620+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Adding newly assigned partitions: policy-pdp-pap-0 policy-drools-pdp | [2025-06-17T07:46:29.629+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Found no committed offset for partition policy-pdp-pap-0 policy-drools-pdp | 
[2025-06-17T07:46:29.642+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4043d709-3d23-415f-b317-2540fafae824-2, groupId=4043d709-3d23-415f-b317-2540fafae824] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.8:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-17T07:46:13.359+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 61 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-17T07:46:13.361+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-17T07:46:14.947+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-17T07:46:15.075+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 113 ms. Found 7 JPA repository interfaces. 
policy-pap | [2025-06-17T07:46:16.119+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-17T07:46:16.132+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-17T07:46:16.134+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-17T07:46:16.134+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-17T07:46:16.194+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-17T07:46:16.194+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2766 ms policy-pap | [2025-06-17T07:46:16.670+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-17T07:46:16.751+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-17T07:46:16.798+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-17T07:46:17.248+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-17T07:46:17.293+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-17T07:46:17.518+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@c96c497 policy-pap | [2025-06-17T07:46:17.521+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-pap | [2025-06-17T07:46:17.623+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-17T07:46:19.634+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-17T07:46:19.637+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-17T07:46:20.872+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-027c289e-8fec-4039-96c7-99c4e91494f7-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 027c289e-8fec-4039-96c7-99c4e91494f7 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = 
null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-17T07:46:20.935+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:21.075+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | 
[2025-06-17T07:46:21.075+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-17T07:46:21.076+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146381074 policy-pap | [2025-06-17T07:46:21.078+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-1, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-17T07:46:21.078+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 
policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap 
| sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-17T07:46:21.079+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:21.086+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-17T07:46:21.086+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-17T07:46:21.086+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146381086 policy-pap | [2025-06-17T07:46:21.087+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-17T07:46:21.411+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - 
PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-17T07:46:21.534+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-17T07:46:21.615+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-17T07:46:21.830+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
policy-pap | [2025-06-17T07:46:22.668+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-17T07:46:22.795+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-17T07:46:22.816+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-17T07:46:22.840+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-17T07:46:22.840+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-17T07:46:22.841+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-17T07:46:22.841+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-17T07:46:22.842+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-17T07:46:22.842+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-17T07:46:22.842+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-17T07:46:22.844+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=027c289e-8fec-4039-96c7-99c4e91494f7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4770e50a policy-pap | [2025-06-17T07:46:22.854+00:00|INFO|SingleThreadedBusTopicSource|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=027c289e-8fec-4039-96c7-99c4e91494f7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-17T07:46:22.855+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 027c289e-8fec-4039-96c7-99c4e91494f7 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | 
max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap 
| sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-17T07:46:22.856+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:22.864+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-17T07:46:22.864+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-17T07:46:22.864+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146382864 policy-pap | [2025-06-17T07:46:22.864+00:00|INFO|ClassicKafkaConsumer|main] [Consumer 
clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-17T07:46:22.865+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-17T07:46:22.865+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3f0c1c4f-353e-4eaf-8bbf-348e0718e8c1, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3d1f6213 policy-pap | [2025-06-17T07:46:22.865+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3f0c1c4f-353e-4eaf-8bbf-348e0718e8c1, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-17T07:46:22.866+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | 
sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | 
ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-17T07:46:22.866+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:22.873+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-17T07:46:22.873+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-17T07:46:22.873+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146382873 policy-pap | [2025-06-17T07:46:22.873+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-17T07:46:22.874+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-17T07:46:22.874+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3f0c1c4f-353e-4eaf-8bbf-348e0718e8c1, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | 
[2025-06-17T07:46:22.874+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=027c289e-8fec-4039-96c7-99c4e91494f7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-17T07:46:22.874+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a9827d53-97c3-4105-974d-9cc6a99979cc, alive=false, publisher=null]]: starting policy-pap | [2025-06-17T07:46:22.889+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | 
metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-17T07:46:22.890+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:22.904+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-pap | [2025-06-17T07:46:22.922+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-17T07:46:22.922+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-17T07:46:22.922+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146382922 policy-pap | [2025-06-17T07:46:22.922+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a9827d53-97c3-4105-974d-9cc6a99979cc, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-17T07:46:22.923+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bc731cde-2dfb-4360-b0fe-24a0d545d05f, alive=false, publisher=null]]: starting policy-pap | [2025-06-17T07:46:22.923+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | 
metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | 
sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-17T07:46:22.924+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-17T07:46:22.925+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
policy-pap | [2025-06-17T07:46:22.928+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-17T07:46:22.928+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-17T07:46:22.928+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750146382928
policy-pap | [2025-06-17T07:46:22.929+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bc731cde-2dfb-4360-b0fe-24a0d545d05f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-17T07:46:22.929+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2025-06-17T07:46:22.929+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2025-06-17T07:46:22.932+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2025-06-17T07:46:22.932+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2025-06-17T07:46:22.937+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2025-06-17T07:46:22.937+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2025-06-17T07:46:22.937+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2025-06-17T07:46:22.938+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2025-06-17T07:46:22.938+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2025-06-17T07:46:22.938+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2025-06-17T07:46:22.942+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2025-06-17T07:46:22.942+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.406 seconds (process running for 10.996)
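Each PAP entry starts with a bracketed ISO-8601 timestamp, so the time between two startup milestones (e.g. "starting PAP Activator" at 07:46:22.929 and "Policy PAP started" at 07:46:22.942) can be computed directly from the log. A small sketch, assuming that `[timestamp|LEVEL|...]` layout; `elapsed_ms` is my name for the helper:

```python
from datetime import datetime, timedelta

def elapsed_ms(first_entry: str, last_entry: str) -> float:
    """Elapsed milliseconds between two log entries whose text starts
    with '[ISO-8601 timestamp|LEVEL|logger]' as in the PAP log above."""
    def ts(entry: str) -> datetime:
        stamp = entry.split("|", 1)[0].lstrip("[")
        return datetime.fromisoformat(stamp)
    # timedelta / timedelta division gives an exact float ratio
    return (ts(last_entry) - ts(first_entry)) / timedelta(milliseconds=1)

start = "[2025-06-17T07:46:22.929+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator"
done = "[2025-06-17T07:46:22.942+00:00|INFO|ServiceManager|main] Policy PAP started"
print(elapsed_ms(start, done))  # 13.0
```

Applying the same helper across the whole ServiceManager sequence gives a quick per-phase startup profile without any extra instrumentation.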
policy-pap | [2025-06-17T07:46:23.386+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: PJ3-IXoLThugmvvSQZmOKA
policy-pap | [2025-06-17T07:46:23.386+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: PJ3-IXoLThugmvvSQZmOKA
policy-pap | [2025-06-17T07:46:23.386+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-17T07:46:23.387+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Cluster ID: PJ3-IXoLThugmvvSQZmOKA
policy-pap | [2025-06-17T07:46:23.433+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-17T07:46:23.436+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-17T07:46:23.446+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-17T07:46:23.446+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: PJ3-IXoLThugmvvSQZmOKA
policy-pap | [2025-06-17T07:46:23.605+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-17T07:46:23.632+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-17T07:46:24.991+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-17T07:46:24.998+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-17T07:46:25.029+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e
policy-pap | [2025-06-17T07:46:25.029+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-17T07:46:25.062+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-17T07:46:25.066+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] (Re-)joining group
policy-pap | [2025-06-17T07:46:25.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Request joining group due to: need to re-join with the given member-id: consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913
policy-pap | [2025-06-17T07:46:25.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] (Re-)joining group
policy-pap | [2025-06-17T07:46:28.054+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e', protocol='range'}
policy-pap | [2025-06-17T07:46:28.065+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-17T07:46:28.081+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913', protocol='range'}
policy-pap | [2025-06-17T07:46:28.082+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Finished assignment for group at generation 1: {consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-17T07:46:28.100+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-f2420281-d5de-4e09-85eb-0425311d2f5e', protocol='range'}
policy-pap | [2025-06-17T07:46:28.100+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3-72a864b7-0b70-4936-9b2d-7d96ec51e913', protocol='range'}
policy-pap | [2025-06-17T07:46:28.101+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-17T07:46:28.101+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-17T07:46:28.107+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-17T07:46:28.107+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-17T07:46:28.125+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-17T07:46:28.125+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Found no committed offset for partition policy-pdp-pap-0
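The rebalance entries above end with the coordinator reporting `Assignment(partitions=[policy-pdp-pap-0])` for each consumer. When checking CSIT runs it is handy to extract that partition list mechanically rather than by eye. A sketch under the assumption that the `Assignment(partitions=[...])` text appears verbatim; `assigned_partitions` is a name I chose for illustration:

```python
import re

def assigned_partitions(entry: str) -> list[str]:
    """Pull the partition list out of a ConsumerCoordinator
    'Assignment(partitions=[...])' log entry like those above."""
    m = re.search(r"Assignment\(partitions=\[([^\]]*)\]\)", entry)
    if not m:
        return []
    return [p.strip() for p in m.group(1).split(",") if p.strip()]

entry = ("Notifying assignor about the new "
         "Assignment(partitions=[policy-pdp-pap-0])")
print(assigned_partitions(entry))  # ['policy-pdp-pap-0']
```

In a test harness this lets an assertion like "both consumers ended up with policy-pdp-pap-0" run against the raw console log.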
policy-pap | [2025-06-17T07:46:28.146+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-027c289e-8fec-4039-96c7-99c4e91494f7-3, groupId=027c289e-8fec-4039-96c7-99c4e91494f7] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-17T07:46:28.146+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-17T07:46:41.624+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-17T07:46:41.624+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-17T07:46:41.626+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 2 ms postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... 
Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-17 07:45:44.261 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-17 07:45:44.264 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-17 07:45:44.274 UTC [52] LOG: database system was shut down at 2025-06-17 07:45:43 UTC postgres | 2025-06-17 07:45:44.283 UTC [49] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. 
postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d 
postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-17 
07:45:45.722 UTC [49] LOG: received fast shutdown request postgres | waiting for server to shut down....2025-06-17 07:45:45.724 UTC [49] LOG: aborting any active transactions postgres | 2025-06-17 07:45:45.727 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 postgres | 2025-06-17 07:45:45.729 UTC [50] LOG: shutting down postgres | 2025-06-17 07:45:45.730 UTC [50] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-17 07:45:46.267 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.384 s, sync=0.148 s, total=0.539 s; sync files=1788, longest=0.008 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-17 07:45:46.278 UTC [49] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-17 07:45:46.381 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-17 07:45:46.381 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-17 07:45:46.381 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-17 07:45:46.385 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-17 07:45:46.393 UTC [102] LOG: database system was shut down at 2025-06-17 07:45:46 UTC postgres | 2025-06-17 07:45:46.400 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-17T07:45:41.052Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-17T07:45:41.052Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, 
revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-17T07:45:41.052Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-17T07:45:41.053Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-17T07:45:41.056Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-17T07:45:41.057Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-17T07:45:41.061Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-17T07:45:41.061Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-17T07:45:41.066Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-17T07:45:41.066Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.3µs prometheus | time=2025-06-17T07:45:41.066Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-17T07:45:41.066Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=333.603µs prometheus | time=2025-06-17T07:45:41.066Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=27.05µs wal_replay_duration=360.583µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.3µs total_replay_duration=445.904µs prometheus | time=2025-06-17T07:45:41.069Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-17T07:45:41.069Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-17T07:45:41.069Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-17T07:45:41.071Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-17T07:45:41.071Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.34µs remote_storage=3.76µs web_handler=1.21µs query_engine=2.17µs scrape=457.395µs scrape_sd=260.152µs notify=171.061µs notify_sd=17.07µs rules=3.731µs tracing=7.24µs filename=/etc/prometheus/prometheus.yml totalDuration=1.975961ms prometheus | time=2025-06-17T07:45:41.071Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 
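Prometheus logs its WAL-replay and config-load timings as Go duration strings (`333.603µs`, `1.3µs`, `1.975961ms` above). To aggregate those figures across runs they first need converting to a common unit. A minimal sketch that handles a single value-plus-unit token only, not the full Go duration grammar; `go_duration_seconds` is my own helper name:

```python
import re

_UNITS = {"ns": 1e-9, "µs": 1e-6, "us": 1e-6, "ms": 1e-3, "s": 1.0}

def go_duration_seconds(text: str) -> float:
    """Convert a simple Go duration string, as printed in the Prometheus
    log above (e.g. '333.603µs'), to seconds. Single value+unit only."""
    m = re.fullmatch(r"([\d.]+)(ns|µs|us|ms|s)", text.strip())
    if not m:
        raise ValueError(f"unrecognized duration: {text!r}")
    return float(m.group(1)) * _UNITS[m.group(2)]

print(go_duration_seconds("2s"))  # 2.0
```

Summing the converted `*_duration` fields from the "Completed loading of configuration file" entry then reproduces (approximately) its reported `totalDuration`.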
prometheus | time=2025-06-17T07:45:41.071Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-17 07:45:42,078] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,081] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,081] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,081] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,081] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,082] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-17 07:45:42,082] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-17 07:45:42,082] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-17 07:45:42,082] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-17 07:45:42,084] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-17 07:45:42,084] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,084] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,084] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,084] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,084] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-17 07:45:42,085] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-17 07:45:42,096] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-17 07:45:42,098] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-17 07:45:42,098] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-17 07:45:42,100] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-17 07:45:42,109] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | 
'_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,109] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,110] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/k
afka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/
java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/us
r/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server 
environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,111] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,112] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,112] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-17 07:45:42,113] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,113] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 
07:45:42,114] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-17 07:45:42,114] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,115] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-17 07:45:42,117] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,117] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,118] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-17 07:45:42,118] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-17 07:45:42,118] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,139] INFO Logging 
initialized @408ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-17 07:45:42,196] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-17 07:45:42,196] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-17 07:45:42,212] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-17 07:45:42,255] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-17 07:45:42,255] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-17 07:45:42,256] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-17 07:45:42,259] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-17 07:45:42,269] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-17 07:45:42,285] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-17 07:45:42,285] INFO Started @558ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-17 07:45:42,285] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-17 07:45:42,293] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-17 07:45:42,294] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-17 07:45:42,295] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-17 07:45:42,296] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-17 07:45:42,307] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-17 07:45:42,307] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-17 07:45:42,308] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-17 07:45:42,308] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-17 07:45:42,313] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-17 07:45:42,313] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-17 07:45:42,316] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-17 07:45:42,317] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-17 07:45:42,317] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-17 07:45:42,325] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-17 07:45:42,326] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false 
(org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-17 07:45:42,338] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-17 07:45:42,339] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-17 07:45:43,403] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... Container policy-drools-pdp Stopping Container grafana Stopping Container policy-csit Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-drools-pdp Stopped Container policy-drools-pdp Removing Container policy-drools-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container kafka Stopping Container policy-api Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container postgres Stopping Container postgres Stopped Container postgres Removing Container postgres Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2113 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! 
-Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins17629815125663200401.sh ---> sysstat.sh [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins14455412713347829943.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']' + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/ [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins6209722983512795352.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TEuX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-TEuX/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins11530204890759206835.sh provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/config12327586523277525584tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins11837536120095978474.sh ---> create-netrc.sh [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins10328672468906677274.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TEuX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-TEuX/bin to PATH [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins9793478419342457228.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins17608903606560151098.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TEuX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-TEuX/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash -l /tmp/jenkins8544059192986833605.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TEuX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-TEuX/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-drools-pdp/2036 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-21755 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr 
arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 140G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 876 23671 0 7619 30835 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:7f:50:eb brd ff:ff:ff:ff:ff:ff inet 10.30.107.88/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 86043sec preferred_lft 86043sec inet6 fe80::f816:3eff:fe7f:50eb/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:11:34:1f:f5 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:11ff:fe34:1ff5/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21755) 06/17/25 _x86_64_ (8 CPU) 07:43:01 LINUX RESTART (8 CPU) 07:44:01 tps rtps wtps bread/s bwrtn/s 07:45:01 234.36 21.21 213.15 2275.51 111402.73 07:46:01 675.24 3.88 671.35 462.06 176732.28 07:47:01 100.82 0.20 100.62 22.26 47081.49 07:48:01 195.82 0.28 195.53 29.60 51089.35 Average: 301.55 6.39 295.16 697.42 96577.08 07:44:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 07:45:01 24856348 31529508 8082872 24.54 131004 6657944 2510168 7.39 1102072 6436992 2741932 07:46:01 23888972 30846556 9050248 27.48 164016 
6867636 6574164 19.34 1959280 6437888 132 07:47:01 22195056 29658676 10744164 32.62 180876 7324144 8699064 25.59 3266148 6763960 33004 07:48:01 23152860 30594752 9786360 29.71 206708 7265476 5760868 16.95 2362656 6721668 408 Average: 23523309 30657373 9415911 28.59 170651 7028800 5886066 17.32 2172539 6590127 693869 07:44:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 07:45:01 ens3 1379.34 812.86 38679.39 66.07 0.00 0.00 0.00 0.00 07:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:45:01 lo 13.60 13.60 1.26 1.26 0.00 0.00 0.00 0.00 07:46:01 br-8736cebc22fd 45.41 55.91 2.73 312.05 0.00 0.00 0.00 0.00 07:46:01 ens3 68.07 57.11 314.74 6.36 0.00 0.00 0.00 0.00 07:46:01 veth236e716 1.53 1.60 0.16 0.17 0.00 0.00 0.00 0.00 07:46:01 vethf1a7ee9 45.33 56.01 3.35 312.06 0.00 0.00 0.00 0.03 07:47:01 br-8736cebc22fd 0.47 0.40 0.03 0.03 0.00 0.00 0.00 0.00 07:47:01 ens3 171.35 109.37 1900.54 8.06 0.00 0.00 0.00 0.00 07:47:01 veth236e716 13.18 10.46 1.51 1.59 0.00 0.00 0.00 0.00 07:47:01 vethf1a7ee9 0.27 0.42 0.02 0.02 0.00 0.00 0.00 0.00 07:48:01 br-8736cebc22fd 0.37 0.18 0.02 0.01 0.00 0.00 0.00 0.00 07:48:01 ens3 71.19 53.52 302.08 19.65 0.00 0.00 0.00 0.00 07:48:01 veth236e716 13.38 9.28 1.02 1.31 0.00 0.00 0.00 0.00 07:48:01 vethebf847b 4.30 6.42 0.73 0.88 0.00 0.00 0.00 0.00 Average: br-8736cebc22fd 11.56 14.12 0.70 78.02 0.00 0.00 0.00 0.00 Average: ens3 422.53 258.24 10300.37 25.04 0.00 0.00 0.00 0.00 Average: veth236e716 7.02 5.34 0.67 0.77 0.00 0.00 0.00 0.00 Average: vethebf847b 1.07 1.60 0.18 0.22 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21755) 06/17/25 _x86_64_ (8 CPU) 07:43:01 LINUX RESTART (8 CPU) 07:44:01 CPU %user %nice %system %iowait %steal %idle 07:45:01 all 17.32 0.00 6.02 7.17 0.06 69.42 07:45:01 0 12.77 0.00 5.63 2.55 0.03 79.01 07:45:01 1 11.64 0.00 6.62 11.45 0.07 70.22 07:45:01 2 14.67 0.00 5.20 2.75 0.05 77.32 07:45:01 3 31.45 0.00 6.19 4.80 0.08 57.48 07:45:01 4 27.39 
0.00 6.35 12.44 0.07 53.76 07:45:01 5 16.57 0.00 6.00 4.00 0.07 73.36 07:45:01 6 13.01 0.00 5.72 16.26 0.08 64.93 07:45:01 7 11.11 0.00 6.43 3.18 0.03 79.25 07:46:01 all 12.34 0.00 4.59 11.03 0.06 71.98 07:46:01 0 13.43 0.00 4.65 6.61 0.07 75.24 07:46:01 1 12.17 0.00 6.14 40.71 0.07 40.91 07:46:01 2 12.19 0.00 4.28 4.72 0.07 78.75 07:46:01 3 13.05 0.00 4.26 3.55 0.05 79.09 07:46:01 4 11.64 0.00 3.89 11.45 0.05 72.97 07:46:01 5 9.68 0.00 4.80 5.12 0.05 80.36 07:46:01 6 12.08 0.00 4.16 7.36 0.03 76.36 07:46:01 7 14.46 0.00 4.55 8.74 0.07 72.19 07:47:01 all 23.94 0.00 2.59 2.01 0.08 71.38 07:47:01 0 18.32 0.00 2.28 1.64 0.08 77.68 07:47:01 1 25.12 0.00 2.72 6.95 0.10 65.12 07:47:01 2 27.44 0.00 2.63 0.18 0.08 69.66 07:47:01 3 23.18 0.00 2.29 3.14 0.07 71.32 07:47:01 4 20.22 0.00 2.95 1.81 0.07 74.95 07:47:01 5 29.75 0.00 2.62 0.24 0.07 67.33 07:47:01 6 24.81 0.00 2.71 1.57 0.08 70.82 07:47:01 7 22.68 0.00 2.50 0.57 0.07 74.18 07:48:01 all 6.14 0.00 1.94 2.84 0.05 89.03 07:48:01 0 4.74 0.00 1.69 0.42 0.03 93.11 07:48:01 1 9.24 0.00 2.19 11.09 0.05 77.43 07:48:01 2 5.13 0.00 1.22 0.15 0.03 93.47 07:48:01 3 3.85 0.00 1.05 0.05 0.05 95.00 07:48:01 4 6.09 0.00 2.13 5.36 0.05 86.37 07:48:01 5 6.59 0.00 2.08 0.89 0.05 90.39 07:48:01 6 5.14 0.00 2.97 4.31 0.07 87.51 07:48:01 7 8.32 0.00 2.21 0.50 0.05 88.91 Average: all 14.93 0.00 3.78 5.76 0.06 75.47 Average: 0 12.31 0.00 3.56 2.81 0.05 81.26 Average: 1 14.54 0.00 4.41 17.54 0.07 63.43 Average: 2 14.84 0.00 3.33 1.95 0.06 79.82 Average: 3 17.88 0.00 3.45 2.88 0.06 75.73 Average: 4 16.32 0.00 3.82 7.75 0.06 72.05 Average: 5 15.64 0.00 3.87 2.56 0.06 77.87 Average: 6 13.76 0.00 3.89 7.37 0.07 74.91 Average: 7 14.14 0.00 3.92 3.25 0.05 78.63