Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141264
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-20901 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-FjrNrkGdrV3d/agent.2103
SSH_AGENT_PID=2105
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/private_key_1015307530348345625.key (/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/private_key_1015307530348345625.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1 # timeout=30
 > git rev-parse 473f78ecac5fb75e5968b31a5bab95eaba72c803^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/changes/64/141264/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
Commit message: "Add Fix fail handling in ACM runtime in CSIT"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins3986876662766761443.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ijNI
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ijNI/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-ijNI/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.38.36 botocore==1.38.36 bs4==0.0.2 cachetools==5.5.2 certifi==2025.4.26 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.2 click==8.2.1 cliff==4.10.0 cmd2==2.6.1 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.3.9 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0
durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.44 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.12 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.24.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==5.4.0 MarkupSafe==3.0.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==4.6.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==4.0.2 oslo.config==9.8.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.1.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 packaging==25.0 pbr==6.1.1 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.6.1 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.2.0 python-jenkins==1.8.2 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.25.1 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.0 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.31.2 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.2 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
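An aside on the fetch earlier in the log: the job checks out the change under review via its Gerrit change ref rather than a branch. A minimal sketch of how that ref is formed, assuming the standard Gerrit refs/changes layout (the fetch itself needs network access, so it is left commented):

```shell
# Gerrit publishes patchset N of change C under
# refs/changes/<last two digits of C>/<C>/<N>.
change=141264
patchset=1
suffix=$(printf '%s' "$change" | tail -c 2)   # last two digits: "64"
ref="refs/changes/${suffix}/${change}/${patchset}"
echo "$ref"                                   # refs/changes/64/141264/1

# Reproducing this job's checkout locally (requires network access):
# git fetch git://cloud.onap.org/mirror/policy/docker.git "$ref"
# git checkout FETCH_HEAD
```

This matches the `git fetch ... refs/changes/64/141264/1` the Jenkins Git plugin ran above.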
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/sh /tmp/jenkins17726322881329663871.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/sh -xe /tmp/jenkins17536692435520687.sh
+ /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/csit/run-project-csit.sh drools-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command. See 'docker --help'
Docker Compose Plugin not installed. Installing now...
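The two docker warnings above each have a standard remedy. A hedged sketch, not taken from this job's scripts (the registry address and variable names are placeholders, and the Compose download URL is the conventional upstream release location; commands that need network access or a Docker daemon are left commented):

```shell
# 1) Avoid the --password warning: pipe the secret on stdin so it never
#    appears in the process list ($REGISTRY/$DOCKER_* are placeholders):
# printf '%s' "$DOCKER_PASSWORD" | docker login "$REGISTRY" -u "$DOCKER_USERNAME" --password-stdin

# 2) Install the Compose v2 CLI plugin when 'docker compose' is missing.
#    Docker discovers CLI plugins under $DOCKER_CONFIG/cli-plugins:
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p "$DOCKER_CONFIG/cli-plugins"
echo "$DOCKER_CONFIG/cli-plugins"
# curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
#     -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
# chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
```

The job's own installer evidently takes the curl route: the ~60 MB transfer that follows is the plugin binary being downloaded.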
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 60.2M  100 60.2M    0     0  66.4M      0 --:--:-- --:--:-- --:--:--  100M
Setting project configuration for: drools-pdp
Configuring docker compose...
Starting drools-pdp using postgres + Grafana/Prometheus
policy-db-migrator Pulling
drools-pdp Pulling
pap Pulling
kafka Pulling
postgres Pulling
api Pulling
grafana Pulling
zookeeper Pulling
prometheus Pulling
[per-layer pull progress elided: interleaved "Pulling fs layer" / Waiting / Downloading / Verifying Checksum / Download complete / Extracting / Pull complete lines for each image layer; the captured log is truncated mid-pull]
35.09MB/91.87MB eca0188f477e Downloading [====================> ] 15.07MB/37.17MB f836d47fdc4d Extracting [========> ] 17.27MB/107.3MB 8f10199ed94b Downloading [====================> ] 3.636MB/8.768MB dcc0c3b2850c Extracting [===============================> ] 47.35MB/76.12MB 6ac0e4adf315 Extracting [======> ] 8.356MB/62.07MB 55f2b468da67 Extracting [=> ] 9.47MB/257.9MB eabd8714fec9 Downloading [==> ] 15.14MB/375MB c124ba1a8b26 Extracting [=======================> ] 42.89MB/91.87MB fbe227156a9a Extracting [=================> ] 5.079MB/14.63MB eca0188f477e Downloading [=================================> ] 25.25MB/37.17MB 8f10199ed94b Verifying Checksum 8f10199ed94b Download complete f963a77d2726 Downloading [=======> ] 3.01kB/21.44kB f963a77d2726 Downloading [==================================================>] 21.44kB/21.44kB f963a77d2726 Verifying Checksum f963a77d2726 Download complete dcc0c3b2850c Extracting [==================================> ] 52.92MB/76.12MB 6ac0e4adf315 Extracting [========> ] 10.58MB/62.07MB f836d47fdc4d Extracting [========> ] 18.38MB/107.3MB f3a82e9f1761 Downloading [> ] 457.7kB/44.41MB 55f2b468da67 Extracting [===> ] 16.15MB/257.9MB eabd8714fec9 Downloading [===> ] 24.33MB/375MB fbe227156a9a Extracting [======================> ] 6.717MB/14.63MB c124ba1a8b26 Extracting [===========================> ] 50.14MB/91.87MB eca0188f477e Downloading [===============================================> ] 35.04MB/37.17MB eca0188f477e Verifying Checksum eca0188f477e Download complete 79161a3f5362 Downloading [================================> ] 3.011kB/4.656kB 79161a3f5362 Downloading [==================================================>] 4.656kB/4.656kB 79161a3f5362 Verifying Checksum 79161a3f5362 Download complete 9c266ba63f51 Downloading [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Verifying Checksum 9c266ba63f51 Download complete dcc0c3b2850c Extracting [=====================================> ] 56.82MB/76.12MB 
f3a82e9f1761 Downloading [=====> ] 5.045MB/44.41MB 6ac0e4adf315 Extracting [==========> ] 12.81MB/62.07MB 55f2b468da67 Extracting [===> ] 20.61MB/257.9MB 2e8a7df9c2ee Downloading [==================================================>] 851B/851B 2e8a7df9c2ee Verifying Checksum 2e8a7df9c2ee Download complete f836d47fdc4d Extracting [==========> ] 22.28MB/107.3MB eabd8714fec9 Downloading [====> ] 33.52MB/375MB 10f05dd8b1db Downloading [==================================================>] 98B/98B 10f05dd8b1db Verifying Checksum 10f05dd8b1db Download complete c124ba1a8b26 Extracting [===============================> ] 58.49MB/91.87MB fbe227156a9a Extracting [============================> ] 8.356MB/14.63MB 41dac8b43ba6 Downloading [==================================================>] 171B/171B 41dac8b43ba6 Verifying Checksum 41dac8b43ba6 Download complete 71a9f6a9ab4d Downloading [> ] 3.009kB/230.6kB dcc0c3b2850c Extracting [========================================> ] 62.39MB/76.12MB 71a9f6a9ab4d Downloading [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Verifying Checksum 71a9f6a9ab4d Download complete f3a82e9f1761 Downloading [============> ] 11.47MB/44.41MB eabd8714fec9 Downloading [=====> ] 44.33MB/375MB f836d47fdc4d Extracting [===========> ] 25.62MB/107.3MB 55f2b468da67 Extracting [====> ] 23.4MB/257.9MB eca0188f477e Extracting [> ] 393.2kB/37.17MB fbe227156a9a Extracting [====================================> ] 10.65MB/14.63MB da3ed5db7103 Downloading [> ] 539.6kB/127.4MB c124ba1a8b26 Extracting [===================================> ] 64.62MB/91.87MB 6ac0e4adf315 Extracting [===========> ] 14.48MB/62.07MB dcc0c3b2850c Extracting [===========================================> ] 66.29MB/76.12MB f3a82e9f1761 Downloading [========================> ] 22.02MB/44.41MB eabd8714fec9 Downloading [=======> ] 56.77MB/375MB f836d47fdc4d Extracting [=============> ] 28.97MB/107.3MB eca0188f477e Extracting [=====> ] 3.932MB/37.17MB c124ba1a8b26 
Extracting [=====================================> ] 69.07MB/91.87MB da3ed5db7103 Downloading [> ] 2.162MB/127.4MB fbe227156a9a Extracting [=======================================> ] 11.63MB/14.63MB 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 6ac0e4adf315 Extracting [=============> ] 16.15MB/62.07MB f3a82e9f1761 Downloading [====================================> ] 32.11MB/44.41MB dcc0c3b2850c Extracting [==============================================> ] 70.19MB/76.12MB eabd8714fec9 Downloading [========> ] 66.5MB/375MB f836d47fdc4d Extracting [===============> ] 33.42MB/107.3MB c124ba1a8b26 Extracting [=======================================> ] 72.97MB/91.87MB eca0188f477e Extracting [========> ] 6.685MB/37.17MB 55f2b468da67 Extracting [======> ] 32.31MB/257.9MB fbe227156a9a Extracting [=========================================> ] 12.29MB/14.63MB da3ed5db7103 Downloading [=> ] 3.784MB/127.4MB 6ac0e4adf315 Extracting [===============> ] 18.94MB/62.07MB dcc0c3b2850c Extracting [==================================================>] 76.12MB/76.12MB f3a82e9f1761 Downloading [================================================> ] 43.12MB/44.41MB f3a82e9f1761 Verifying Checksum f3a82e9f1761 Download complete eabd8714fec9 Downloading [=========> ] 74.61MB/375MB c955f6e31a04 Downloading [===========================================> ] 3.011kB/3.446kB c955f6e31a04 Downloading [==================================================>] 3.446kB/3.446kB c955f6e31a04 Verifying Checksum c955f6e31a04 Download complete fbe227156a9a Extracting [==================================================>] 14.63MB/14.63MB 2d429b9e73a6 Downloading [> ] 293.8kB/29.13MB c124ba1a8b26 Extracting [=========================================> ] 76.87MB/91.87MB dcc0c3b2850c Pull complete 55f2b468da67 Extracting [=======> ] 41.22MB/257.9MB eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB eb7cda286a15 Extracting [==================================================>] 
1.119kB/1.119kB f836d47fdc4d Extracting [================> ] 35.09MB/107.3MB fbe227156a9a Pull complete da3ed5db7103 Downloading [==> ] 7.568MB/127.4MB b56567b07821 Extracting [==================================================>] 1.077kB/1.077kB eca0188f477e Extracting [=============> ] 9.83MB/37.17MB eabd8714fec9 Downloading [===========> ] 85.97MB/375MB 6ac0e4adf315 Extracting [==================> ] 23.4MB/62.07MB 2d429b9e73a6 Downloading [=======> ] 4.423MB/29.13MB c124ba1a8b26 Extracting [==============================================> ] 84.67MB/91.87MB 55f2b468da67 Extracting [=========> ] 49.58MB/257.9MB da3ed5db7103 Downloading [======> ] 16.22MB/127.4MB f836d47fdc4d Extracting [=================> ] 37.32MB/107.3MB eca0188f477e Extracting [==================> ] 13.76MB/37.17MB eabd8714fec9 Downloading [============> ] 97.32MB/375MB 6ac0e4adf315 Extracting [===================> ] 24.51MB/62.07MB 2d429b9e73a6 Downloading [====================> ] 12.09MB/29.13MB 55f2b468da67 Extracting [===========> ] 56.82MB/257.9MB c124ba1a8b26 Extracting [=================================================> ] 91.36MB/91.87MB c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB b56567b07821 Pull complete eb7cda286a15 Pull complete f243361b999b Extracting [==================================================>] 5.242kB/5.242kB da3ed5db7103 Downloading [=========> ] 25.41MB/127.4MB f243361b999b Extracting [==================================================>] 5.242kB/5.242kB eca0188f477e Extracting [======================> ] 16.52MB/37.17MB f836d47fdc4d Extracting [==================> ] 40.11MB/107.3MB eabd8714fec9 Downloading [==============> ] 110.8MB/375MB c124ba1a8b26 Pull complete api Pulled 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 2d429b9e73a6 Downloading 
[=====================================> ] 22.12MB/29.13MB 6ac0e4adf315 Extracting [======================> ] 27.85MB/62.07MB 55f2b468da67 Extracting [============> ] 62.95MB/257.9MB da3ed5db7103 Downloading [=============> ] 34.6MB/127.4MB eca0188f477e Extracting [==========================> ] 19.66MB/37.17MB f836d47fdc4d Extracting [===================> ] 42.89MB/107.3MB eabd8714fec9 Downloading [================> ] 122.7MB/375MB 6ac0e4adf315 Extracting [========================> ] 30.64MB/62.07MB 2d429b9e73a6 Downloading [===============================================> ] 27.72MB/29.13MB 55f2b468da67 Extracting [==============> ] 72.97MB/257.9MB 6394804c2196 Pull complete 2d429b9e73a6 Verifying Checksum 2d429b9e73a6 Download complete pap Pulled da3ed5db7103 Downloading [=================> ] 43.79MB/127.4MB eca0188f477e Extracting [==============================> ] 22.81MB/37.17MB 46eab5b44a35 Downloading [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Verifying Checksum 46eab5b44a35 Download complete eabd8714fec9 Downloading [==================> ] 135.2MB/375MB f836d47fdc4d Extracting [=====================> ] 46.24MB/107.3MB f243361b999b Pull complete c4d302cc468d Downloading [> ] 48.06kB/4.534MB 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 6ac0e4adf315 Extracting [==========================> ] 32.87MB/62.07MB 55f2b468da67 Extracting [===============> ] 79.66MB/257.9MB 2d429b9e73a6 Extracting [> ] 294.9kB/29.13MB da3ed5db7103 Downloading [======================> ] 57.31MB/127.4MB eca0188f477e Extracting [==================================> ] 25.95MB/37.17MB eabd8714fec9 Downloading [===================> ] 147.6MB/375MB f836d47fdc4d Extracting [======================> ] 49.02MB/107.3MB c4d302cc468d Downloading [=======================================> ] 3.538MB/4.534MB c4d302cc468d 
Verifying Checksum c4d302cc468d Download complete 6ac0e4adf315 Extracting [=================================> ] 41.22MB/62.07MB 01e0882c90d9 Downloading [> ] 15.3kB/1.447MB 55f2b468da67 Extracting [================> ] 86.34MB/257.9MB 01e0882c90d9 Verifying Checksum 01e0882c90d9 Download complete da3ed5db7103 Downloading [===========================> ] 70.29MB/127.4MB eca0188f477e Extracting [=======================================> ] 29.1MB/37.17MB eabd8714fec9 Downloading [=====================> ] 161.1MB/375MB 2d429b9e73a6 Extracting [====> ] 2.359MB/29.13MB 7abf0dc59d35 Pull complete 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB f836d47fdc4d Extracting [========================> ] 51.81MB/107.3MB 531ee2cf3c0c Downloading [> ] 80.83kB/8.066MB 6ac0e4adf315 Extracting [=======================================> ] 48.46MB/62.07MB 55f2b468da67 Extracting [===================> ] 98.04MB/257.9MB da3ed5db7103 Downloading [=================================> ] 85.97MB/127.4MB eabd8714fec9 Downloading [=======================> ] 175.2MB/375MB 2d429b9e73a6 Extracting [========> ] 5.014MB/29.13MB eca0188f477e Extracting [============================================> ] 33.03MB/37.17MB 6ac0e4adf315 Extracting [=============================================> ] 56.26MB/62.07MB f836d47fdc4d Extracting [=========================> ] 54.03MB/107.3MB 531ee2cf3c0c Downloading [======> ] 1.064MB/8.066MB 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB da3ed5db7103 Downloading [=======================================> ] 100MB/127.4MB eabd8714fec9 Downloading [========================> ] 182.2MB/375MB 991de477d40a Pull complete 5efc16ba9cdc Extracting [==================================================>] 19.52kB/19.52kB 2d429b9e73a6 Extracting [============> ] 7.373MB/29.13MB eca0188f477e Extracting 
[==============================================> ] 34.21MB/37.17MB 531ee2cf3c0c Downloading [============> ] 2.047MB/8.066MB 6ac0e4adf315 Extracting [================================================> ] 60.72MB/62.07MB f836d47fdc4d Extracting [===========================> ] 57.93MB/107.3MB 55f2b468da67 Extracting [====================> ] 107MB/257.9MB da3ed5db7103 Downloading [============================================> ] 114.6MB/127.4MB eabd8714fec9 Downloading [=========================> ] 194.1MB/375MB 531ee2cf3c0c Downloading [============================> ] 4.586MB/8.066MB 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB eca0188f477e Extracting [================================================> ] 36.18MB/37.17MB 2d429b9e73a6 Extracting [================> ] 9.732MB/29.13MB f836d47fdc4d Extracting [============================> ] 61.28MB/107.3MB da3ed5db7103 Downloading [=================================================> ] 125.4MB/127.4MB 55f2b468da67 Extracting [=====================> ] 110.3MB/257.9MB eabd8714fec9 Downloading [===========================> ] 207.1MB/375MB 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB da3ed5db7103 Verifying Checksum da3ed5db7103 Download complete eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB 531ee2cf3c0c Verifying Checksum 531ee2cf3c0c Download complete ed54a7dee1d8 Downloading [> ] 15.3kB/1.196MB 12c5c803443f Downloading [==================================================>] 116B/116B 12c5c803443f Verifying Checksum 12c5c803443f Download complete ed54a7dee1d8 Verifying Checksum ed54a7dee1d8 Download complete 5efc16ba9cdc Pull complete e27c75a98748 Downloading [===============================================> ] 3.011kB/3.144kB e27c75a98748 Downloading [==================================================>] 3.144kB/3.144kB e27c75a98748 Verifying Checksum e27c75a98748 Download complete 
2d429b9e73a6 Extracting [===================> ] 11.5MB/29.13MB a83b68436f09 Downloading [===============> ] 3.011kB/9.919kB a83b68436f09 Downloading [==================================================>] 9.919kB/9.919kB a83b68436f09 Verifying Checksum a83b68436f09 Download complete e73cb4a42719 Downloading [> ] 539.6kB/109.1MB 55f2b468da67 Extracting [======================> ] 113.6MB/257.9MB 787d6bee9571 Downloading [==================================================>] 127B/127B 787d6bee9571 Verifying Checksum 787d6bee9571 Download complete f836d47fdc4d Extracting [=============================> ] 64.06MB/107.3MB eabd8714fec9 Downloading [============================> ] 217.3MB/375MB 13ff0988aaea Downloading [==================================================>] 167B/167B 13ff0988aaea Verifying Checksum 13ff0988aaea Download complete policy-db-migrator Pulled eca0188f477e Pull complete 6ac0e4adf315 Pull complete e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B 4b82842ab819 Downloading [===========================> ] 3.011kB/5.415kB 4b82842ab819 Downloading [==================================================>] 5.415kB/5.415kB 4b82842ab819 Verifying Checksum 4b82842ab819 Download complete 7e568a0dc8fb Downloading [==================================================>] 184B/184B 7e568a0dc8fb Verifying Checksum 7e568a0dc8fb Download complete 2d429b9e73a6 Extracting [==========================> ] 15.34MB/29.13MB e73cb4a42719 Downloading [====> ] 10.27MB/109.1MB f836d47fdc4d Extracting [===============================> ] 66.85MB/107.3MB 55f2b468da67 Extracting [======================> ] 117MB/257.9MB eabd8714fec9 Downloading [===============================> ] 233MB/375MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB e444bcd4d577 Pull complete 2d429b9e73a6 Extracting [=================================> ] 19.76MB/29.13MB e73cb4a42719 Downloading 
[=========> ] 21.09MB/109.1MB f836d47fdc4d Extracting [================================> ] 70.19MB/107.3MB eabd8714fec9 Downloading [=================================> ] 249.8MB/375MB 55f2b468da67 Extracting [=======================> ] 119.8MB/257.9MB f3b09c502777 Extracting [===> ] 3.899MB/56.52MB 2d429b9e73a6 Extracting [=========================================> ] 23.89MB/29.13MB e73cb4a42719 Downloading [================> ] 35.14MB/109.1MB eabd8714fec9 Downloading [===================================> ] 264.9MB/375MB 55f2b468da67 Extracting [========================> ] 124.2MB/257.9MB f836d47fdc4d Extracting [==================================> ] 73.53MB/107.3MB f3b09c502777 Extracting [=====> ] 6.685MB/56.52MB e73cb4a42719 Downloading [======================> ] 48.12MB/109.1MB eabd8714fec9 Downloading [=====================================> ] 278.4MB/375MB 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 55f2b468da67 Extracting [=========================> ] 129.2MB/257.9MB 4ba79830ebce Downloading [> ] 539.6kB/166.8MB f836d47fdc4d Extracting [====================================> ] 77.43MB/107.3MB f3b09c502777 Extracting [=======> ] 8.913MB/56.52MB e73cb4a42719 Downloading [============================> ] 61.09MB/109.1MB eabd8714fec9 Downloading [=======================================> ] 293MB/375MB 2d429b9e73a6 Extracting [===============================================> ] 27.72MB/29.13MB 55f2b468da67 Extracting [=========================> ] 132.6MB/257.9MB 4ba79830ebce Downloading [==> ] 9.19MB/166.8MB f836d47fdc4d Extracting [=====================================> ] 80.77MB/107.3MB e73cb4a42719 Downloading [===================================> ] 76.77MB/109.1MB f3b09c502777 Extracting [=========> ] 11.14MB/56.52MB eabd8714fec9 Downloading [========================================> ] 307.1MB/375MB 55f2b468da67 Extracting [==========================> ] 136.5MB/257.9MB 4ba79830ebce Downloading [======> ] 21.09MB/166.8MB 
f836d47fdc4d Extracting [======================================> ] 83.56MB/107.3MB e73cb4a42719 Downloading [==========================================> ] 91.91MB/109.1MB eabd8714fec9 Downloading [===========================================> ] 323.3MB/375MB 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB f3b09c502777 Extracting [============> ] 14.48MB/56.52MB 55f2b468da67 Extracting [===========================> ] 140.4MB/257.9MB 4ba79830ebce Downloading [=========> ] 32.44MB/166.8MB f836d47fdc4d Extracting [========================================> ] 87.46MB/107.3MB e73cb4a42719 Downloading [=================================================> ] 107.1MB/109.1MB eabd8714fec9 Downloading [============================================> ] 336.8MB/375MB 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB e73cb4a42719 Verifying Checksum e73cb4a42719 Download complete 4ba79830ebce Downloading [============> ] 42.17MB/166.8MB 55f2b468da67 Extracting [===========================> ] 143.2MB/257.9MB f3b09c502777 Extracting [==============> ] 16.71MB/56.52MB f836d47fdc4d Extracting [==========================================> ] 90.8MB/107.3MB eabd8714fec9 Downloading [===============================================> ] 352.5MB/375MB 4ba79830ebce Downloading [================> ] 54.07MB/166.8MB f3b09c502777 Extracting [================> ] 18.94MB/56.52MB 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB f836d47fdc4d Extracting [=============================================> ] 97.48MB/107.3MB d223479d7367 Downloading [> ] 80.82kB/6.742MB eabd8714fec9 Downloading [================================================> ] 361.7MB/375MB 4ba79830ebce Downloading [===================> ] 65.42MB/166.8MB 55f2b468da67 Extracting [============================> ] 148.2MB/257.9MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB f836d47fdc4d Extracting 
[===============================================> ] 100.8MB/107.3MB eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete d223479d7367 Downloading [==================> ] 2.538MB/6.742MB 4ba79830ebce Downloading [=====================> ] 71.91MB/166.8MB f3b09c502777 Extracting [====================> ] 23.4MB/56.52MB f836d47fdc4d Extracting [===============================================> ] 102.5MB/107.3MB 55f2b468da67 Extracting [=============================> ] 151MB/257.9MB 2d429b9e73a6 Pull complete d223479d7367 Downloading [===========================================> ] 5.815MB/6.742MB d223479d7367 Verifying Checksum d223479d7367 Download complete eabd8714fec9 Extracting [> ] 557.1kB/375MB 4ba79830ebce Downloading [=========================> ] 85.43MB/166.8MB 55f2b468da67 Extracting [=============================> ] 153.2MB/257.9MB f3b09c502777 Extracting [=======================> ] 26.74MB/56.52MB 7ce9630189bb Downloading [> ] 326.6kB/31.04MB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB f836d47fdc4d Extracting [================================================> ] 103.6MB/107.3MB eabd8714fec9 Extracting [=> ] 10.58MB/375MB 2d7f854c01cf Downloading [==================================================>] 372B/372B 2d7f854c01cf Verifying Checksum 2d7f854c01cf Download complete 4ba79830ebce Downloading [=============================> ] 98.4MB/166.8MB 55f2b468da67 Extracting [==============================> ] 155.4MB/257.9MB f3b09c502777 Extracting [===========================> ] 30.64MB/56.52MB 7ce9630189bb Downloading [=================> ] 10.6MB/31.04MB eabd8714fec9 Extracting [==> ] 16.71MB/375MB f836d47fdc4d Extracting [================================================> ] 104.7MB/107.3MB 4ba79830ebce Downloading [================================> ] 109.2MB/166.8MB 7ce9630189bb Downloading 
[================================> ] 20.25MB/31.04MB 4ba79830ebce Downloading [=================================> ] 110.3MB/166.8MB f3b09c502777 Extracting [======================================> ] 43.45MB/56.52MB 8e665a4a2af9 Downloading [> ] 539.6kB/107.2MB eabd8714fec9 Extracting [==> ] 20.05MB/375MB 55f2b468da67 Extracting [==============================> ] 158.8MB/257.9MB f836d47fdc4d Extracting [=================================================> ] 105.3MB/107.3MB 7ce9630189bb Verifying Checksum 7ce9630189bb Download complete 4ba79830ebce Downloading [=====================================> ] 124.9MB/166.8MB 8e665a4a2af9 Downloading [====> ] 8.65MB/107.2MB f3b09c502777 Extracting [==============================================> ] 52.36MB/56.52MB eabd8714fec9 Extracting [===> ] 23.4MB/375MB 55f2b468da67 Extracting [===============================> ] 163.2MB/257.9MB 4ba79830ebce Downloading [==========================================> ] 140.6MB/166.8MB 8e665a4a2af9 Downloading [=========> ] 21.09MB/107.2MB eabd8714fec9 Extracting [===> ] 26.18MB/375MB f836d47fdc4d Extracting [=================================================> ] 105.8MB/107.3MB 55f2b468da67 Extracting [================================> ] 166MB/257.9MB 4ba79830ebce Downloading [============================================> ] 147.1MB/166.8MB 8e665a4a2af9 Downloading [===========> ] 25.41MB/107.2MB f836d47fdc4d Extracting [=================================================> ] 107MB/107.3MB 219d845251ba Downloading [> ] 539.6kB/108.2MB eabd8714fec9 Extracting [===> ] 28.41MB/375MB f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB 55f2b468da67 Extracting [================================> ] 166.6MB/257.9MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 4ba79830ebce Downloading 
[===============================================> ] 160MB/166.8MB 8e665a4a2af9 Downloading [==================> ] 38.93MB/107.2MB eabd8714fec9 Extracting [====> ] 36.77MB/375MB 219d845251ba Downloading [====> ] 9.19MB/108.2MB 4ba79830ebce Verifying Checksum 4ba79830ebce Download complete 8e665a4a2af9 Downloading [=======================> ] 51.36MB/107.2MB eabd8714fec9 Extracting [======> ] 45.68MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 219d845251ba Downloading [==========> ] 22.17MB/108.2MB 4ba79830ebce Extracting [> ] 557.1kB/166.8MB 8e665a4a2af9 Downloading [==============================> ] 65.42MB/107.2MB eabd8714fec9 Extracting [=======> ] 54.03MB/375MB 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB 219d845251ba Downloading [================> ] 36.22MB/108.2MB 4ba79830ebce Extracting [=> ] 4.456MB/166.8MB 8e665a4a2af9 Downloading [=====================================> ] 80.56MB/107.2MB 219d845251ba Downloading [=======================> ] 50.82MB/108.2MB eabd8714fec9 Extracting [========> ] 61.83MB/375MB 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 4ba79830ebce Extracting [====> ] 15.04MB/166.8MB 46eab5b44a35 Pull complete eabd8714fec9 Extracting [========> ] 64.06MB/375MB 219d845251ba Downloading [===========================> ] 60.55MB/108.2MB 8e665a4a2af9 Downloading [=============================================> ] 97.86MB/107.2MB 8e665a4a2af9 Verifying Checksum 8e665a4a2af9 Download complete 4ba79830ebce Extracting [====> ] 15.6MB/166.8MB eabd8714fec9 Extracting [========> ] 64.62MB/375MB 219d845251ba Downloading [============================> ] 61.09MB/108.2MB 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB 4ba79830ebce Extracting [======> ] 22.84MB/166.8MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 219d845251ba Downloading [================================> ] 70.83MB/108.2MB 
[docker pull layer download/extract progress trimmed]
prometheus Pulled
grafana Pulled
kafka Pulled
drools-pdp Pulled
postgres Pulled
zookeeper Pulled
Network compose_default Creating
Network compose_default Created
Container prometheus Creating
Container zookeeper Creating
Container postgres Creating
Container postgres Created
Container policy-db-migrator Creating
Container zookeeper Created
Container kafka Creating
Container prometheus Created
Container grafana Creating
Container policy-db-migrator Created
Container policy-api Creating
Container grafana Created
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-drools-pdp Creating
Container policy-drools-pdp Created
Container postgres Starting
Container zookeeper Starting
Container prometheus Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-drools-pdp Starting
Container policy-drools-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for drools-pdp to start...
Checking if REST port 30216 is open on localhost ...
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/csit/resources/tests/models'...
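The "Checking if REST port 30216 is open on localhost" step above can be sketched as a simple polling loop. This is a minimal bash sketch, not the actual CSIT wait script: the function name `wait_for_port` is illustrative, and it uses bash's built-in `/dev/tcp` probe instead of `nc` to avoid an extra dependency. Port 30216 and the roughly one-minute wait are taken from the log.

```shell
# Hedged sketch of a readiness wait like the one in the log above.
# Assumes bash (for the /dev/tcp pseudo-device).
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-60} start elapsed
  start=$(date +%s)
  while :; do
    # Try to open a TCP connection; the subshell closes the fd on exit.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "${host}:${port} is open"
      return 0
    fi
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timeout waiting for ${host}:${port}"
      return 1
    fi
    sleep 2
  done
}

# e.g. wait_for_port localhost 30216 60
```

A loop like this is why the log shows the container status table only after the wait completes: the test container is not attached until the drools-pdp REST endpoint answers.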
Building robot framework docker image
sha256:1e546779bc02021c9929902d93fb7e174b1fa7fc26a38627f4c9bb7272434276
top - 14:57:57 up 4 min, 0 users, load average: 1.95, 1.48, 0.63
Tasks: 229 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.4 us, 3.8 sy, 0.0 ni, 77.2 id, 3.5 wa, 0.0 hi, 0.1 si, 0.1 st
          total    used    free    shared    buff/cache    available
Mem:        31G    2.7G     21G       27M          7.7G          28G
Swap:      1.0G      0B    1.0G
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
f903a365e727   policy-drools-pdp   0.84%   292.8MiB / 31.41GiB   0.91%   31.9kB / 40.6kB   0B / 8.19kB     54
ba9bf99fd1fe   policy-pap          3.24%   550.2MiB / 31.41GiB   1.71%   81.8kB / 124kB    0B / 139MB      67
5db806cbfe64   policy-api          0.11%   423.4MiB / 31.41GiB   1.32%   1.15MB / 1.02MB   0B / 0B         60
fe83b0b880a6   kafka               4.47%   394.7MiB / 31.41GiB   1.23%   153kB / 138kB     0B / 569kB      83
b7b9989737b1   grafana             0.13%   108.6MiB / 31.41GiB   0.34%   19.1MB / 126kB    0B / 30.6MB     19
b6989cad6200   zookeeper           0.07%   87.13MiB / 31.41GiB   0.27%   53.5kB / 45kB     4.1kB / 377kB   62
b3a0ce8f74b5   postgres            0.00%   84.79MiB / 31.41GiB   0.26%   1.64MB / 1.71MB   225kB / 158MB   26
160704f11dc1   prometheus          0.00%   20.87MiB / 31.41GiB   0.06%   88.9kB / 3.37kB   0B / 0B         13
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
| ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV:docker policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check                                  | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics                     | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test                                                       | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output:  /tmp/results/output.xml
policy-csit | Log:     /tmp/results/log.html
policy-csit | Report:  /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                     NAMES               STATUS
nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT   policy-drools-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT      policy-pap          Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT      policy-api          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9         kafka               Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest              grafana             Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest    zookeeper           Up About a minute
nexus3.onap.org:10001/library/postgres:16.4               postgres            Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest              prometheus          Up About a minute
Shut down started!
Collecting logs from docker compose containers...
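The ROBOT_VARIABLES printed above are ordinary Robot Framework `-v name:value` command-line variable definitions; inside the suite they become `${POLICY_DROOLS_IP}` and so on. As a rough sketch of how such a string maps onto a `robot` invocation (the CSIT wrapper script itself is not shown in this log, so `build_robot_argv` is a hypothetical helper, not the actual wrapper code):

```python
# Sketch only: mimics passing the log's -v pairs through to the robot CLI.
import shlex

# A subset of the variables printed in the log above.
ROBOT_VARIABLES = (
    "-v POLICY_DROOLS_IP:policy-drools-pdp:9696 "
    "-v PROMETHEUS_IP:prometheus:9090 "
    "-v TEST_ENV:docker"
)

def build_robot_argv(variables: str, outputdir: str, suite: str) -> list[str]:
    """Compose the argv a wrapper could hand to subprocess/exec."""
    return ["robot", *shlex.split(variables), "--outputdir", outputdir, suite]

argv = build_robot_argv(ROBOT_VARIABLES, "/tmp/results", "drools-pdp-test.robot")
print(argv)
```

Running this against the results directory seen in the log (`/tmp/results`) would produce the `output.xml`, `log.html`, and `report.html` artifacts listed below; the `RESULT: 0` line is the robot exit code (0 = all tests passed).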
grafana | logger=settings t=2025-06-13T14:56:10.370622662Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-13T14:56:10Z
grafana | logger=settings t=2025-06-13T14:56:10.370927083Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-13T14:56:10.370938104Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-13T14:56:10.370942354Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-13T14:56:10.370945664Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-13T14:56:10.370948264Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-13T14:56:10.370951105Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-13T14:56:10.370954275Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-13T14:56:10.370957475Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-13T14:56:10.370962095Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-13T14:56:10.370965976Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-13T14:56:10.370969296Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-13T14:56:10.370972536Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-13T14:56:10.370983477Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-13T14:56:10.370987247Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-13T14:56:10.370990947Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-13T14:56:10.370994257Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-13T14:56:10.370997508Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-13T14:56:10.371000598Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-13T14:56:10.37132866Z level=info msg=FeatureToggles recoveryThreshold=true nestedFolders=true prometheusUsesCombobox=true dataplaneFrontendFallback=true alertingRulePermanentlyDelete=true alertingApiServer=true alertingNotificationsStepMode=true logsInfiniteScrolling=true angularDeprecationUI=true dashboardSceneSolo=true cloudWatchCrossAccountQuerying=true alertingRuleVersionHistoryRestore=true logsPanelControls=true correlations=true newPDFRendering=true alertingUIOptimizeReducer=true logRowsPopoverMenu=true prometheusAzureOverrideAudience=true alertRuleRestore=true onPremToCloudMigrations=true azureMonitorPrometheusExemplars=true formatString=true cloudWatchNewLabelParsing=true azureMonitorEnableUserAuth=true failWrongDSUID=true reportingUseRawTimeRange=true alertingRuleRecoverDeleted=true pinNavItems=true dashboardScene=true panelMonitoring=true kubernetesClientDashboardsFolders=true influxdbBackendMigration=true newDashboardSharingComponent=true groupToNestedTableTransformation=true dashboardSceneForViewers=true lokiLabelNamesQueryApi=true lokiStructuredMetadata=true externalCorePlugins=true promQLScope=true alertingInsights=true addFieldFromCalculationStatFunctions=true ssoSettingsSAML=true newFiltersUI=true annotationPermissionUpdate=true logsContextDatasourceUi=true alertingSimplifiedRouting=true grafanaconThemes=true logsExploreTableVisualisation=true publicDashboardsScene=true tlsMemcached=true pluginsDetailsRightPanel=true kubernetesPlaylists=true transformationsRedesign=true lokiQuerySplitting=true lokiQueryHints=true recordedQueriesMulti=true unifiedRequestLog=true unifiedStorageSearchPermissionFiltering=true cloudWatchRoundUpEndTime=true preinstallAutoUpdate=true alertingQueryAndExpressionsStepMode=true useSessionStorageForRedirection=true awsAsyncQueryCaching=true dashgpt=true ssoSettingsApi=true
grafana | logger=sqlstore t=2025-06-13T14:56:10.371383404Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-13T14:56:10.371396805Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-13T14:56:10.372946359Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-13T14:56:10.37295718Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-13T14:56:10.373648476Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-13T14:56:10.374477582Z level=info msg="Migration successfully executed" id="create migration_log table" duration=828.876µs
grafana | logger=migrator t=2025-06-13T14:56:10.38082982Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-13T14:56:10.381405549Z level=info msg="Migration successfully executed" id="create user table" duration=575.519µs
grafana | logger=migrator t=2025-06-13T14:56:10.384893514Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-13T14:56:10.386269006Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.297368ms
grafana | logger=migrator t=2025-06-13T14:56:10.390571966Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-13T14:56:10.392134391Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.562995ms
grafana | logger=migrator t=2025-06-13T14:56:10.404784443Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.406190998Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.406715ms
grafana | logger=migrator t=2025-06-13T14:56:10.410098711Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.41096987Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=870.829µs
grafana | logger=migrator t=2025-06-13T14:56:10.416214203Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.41898414Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.769686ms
grafana | logger=migrator t=2025-06-13T14:56:10.422507337Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-13T14:56:10.423532866Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.024699ms
grafana | logger=migrator t=2025-06-13T14:56:10.42715525Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.428075702Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=919.932µs
grafana | logger=migrator t=2025-06-13T14:56:10.433455644Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.434357115Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=900.851µs
grafana | logger=migrator t=2025-06-13T14:56:10.437921985Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:10.438657865Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=735.019µs
grafana | logger=migrator t=2025-06-13T14:56:10.444461885Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-13T14:56:10.445511596Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.048941ms
grafana | logger=migrator t=2025-06-13T14:56:10.458475259Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-13T14:56:10.460385318Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.910329ms
grafana | logger=migrator t=2025-06-13T14:56:10.464372656Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.464710819Z level=info msg="Migration successfully executed" id="Update user table charset" duration=197.223µs
grafana | logger=migrator t=2025-06-13T14:56:10.468487453Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-13T14:56:10.469818843Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.33066ms
grafana | logger=migrator t=2025-06-13T14:56:10.473240204Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-13T14:56:10.473670453Z level=info msg="Migration successfully executed" id="Add missing user data" duration=432.749µs
grafana | logger=migrator t=2025-06-13T14:56:10.480874758Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-13T14:56:10.482181716Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.306668ms
grafana | logger=migrator t=2025-06-13T14:56:10.485793369Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-13T14:56:10.487167312Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.368402ms
grafana | logger=migrator t=2025-06-13T14:56:10.490875021Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-13T14:56:10.492421275Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.546234ms
grafana | logger=migrator t=2025-06-13T14:56:10.495889559Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-13T14:56:10.504627988Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.734558ms
grafana | logger=migrator t=2025-06-13T14:56:10.53004894Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-13T14:56:10.53214022Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.092311ms
grafana | logger=migrator t=2025-06-13T14:56:10.536633363Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-13T14:56:10.537174259Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=540.386µs
grafana | logger=migrator t=2025-06-13T14:56:10.54134567Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-13T14:56:10.542303005Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=956.885µs
grafana | logger=migrator t=2025-06-13T14:56:10.549081331Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-13T14:56:10.551089157Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.007555ms
grafana | logger=migrator t=2025-06-13T14:56:10.557476457Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-13T14:56:10.558095749Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=625.022µs
grafana | logger=migrator t=2025-06-13T14:56:10.562960766Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-13T14:56:10.56391158Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=950.904µs
grafana | logger=migrator t=2025-06-13T14:56:10.567254215Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-13T14:56:10.567985925Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=731.879µs
grafana | logger=migrator t=2025-06-13T14:56:10.590767889Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-13T14:56:10.591471436Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=706.007µs
grafana | logger=migrator t=2025-06-13T14:56:10.594917128Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-13T14:56:10.596320523Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.402085ms
grafana | logger=migrator t=2025-06-13T14:56:10.601740108Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-13T14:56:10.602905086Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.164688ms
grafana | logger=migrator t=2025-06-13T14:56:10.606401392Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-13T14:56:10.607105119Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=702.957µs
grafana | logger=migrator t=2025-06-13T14:56:10.610276243Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-13T14:56:10.611087937Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=811.184µs
grafana | logger=migrator t=2025-06-13T14:56:10.616270747Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-13T14:56:10.617145095Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=872.609µs
grafana | logger=migrator t=2025-06-13T14:56:10.620530473Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.620555995Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=26.112µs
grafana | logger=migrator t=2025-06-13T14:56:10.6241871Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.625040397Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=854.548µs
grafana | logger=migrator t=2025-06-13T14:56:10.628330119Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.629073989Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=744.05µs
grafana | logger=migrator t=2025-06-13T14:56:10.644122652Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.645166143Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.04303ms
grafana | logger=migrator t=2025-06-13T14:56:10.648595484Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.649640804Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.087233ms
grafana | logger=migrator t=2025-06-13T14:56:10.65478355Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.657999177Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.214387ms
grafana | logger=migrator t=2025-06-13T14:56:10.661080424Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-13T14:56:10.661958224Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=877.049µs
grafana | logger=migrator t=2025-06-13T14:56:10.664812236Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.665553116Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=740.309µs
grafana | logger=migrator t=2025-06-13T14:56:10.668386316Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.669065812Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=679.426µs
grafana | logger=migrator t=2025-06-13T14:56:10.673793241Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.674953069Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.159548ms
grafana | logger=migrator t=2025-06-13T14:56:10.678131613Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.67928233Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.149897ms
grafana | logger=migrator t=2025-06-13T14:56:10.682606944Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:10.682950837Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=343.673µs
grafana | logger=migrator t=2025-06-13T14:56:10.687467512Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T14:56:10.687941703Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=470.262µs
grafana | logger=migrator t=2025-06-13T14:56:10.691098256Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-13T14:56:10.691642073Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=543.736µs
grafana | logger=migrator t=2025-06-13T14:56:10.694926264Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-13T14:56:10.695970494Z level=info msg="Migration successfully executed" id="create star table" duration=1.04381ms
grafana | logger=migrator t=2025-06-13T14:56:10.718159069Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:10.719389671Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.230293ms
grafana | logger=migrator t=2025-06-13T14:56:10.724386878Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-13T14:56:10.725817384Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.429806ms
grafana | logger=migrator t=2025-06-13T14:56:10.729681775Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-13T14:56:10.731023265Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.341081ms
grafana | logger=migrator t=2025-06-13T14:56:10.735132352Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-13T14:56:10.736487303Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.354881ms
grafana | logger=migrator t=2025-06-13T14:56:10.739528308Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-13T14:56:10.740279418Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=750.89µs
grafana | logger=migrator t=2025-06-13T14:56:10.744747949Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-13T14:56:10.745555814Z level=info msg="Migration successfully executed" id="create org table v1" duration=843.287µs
grafana | logger=migrator t=2025-06-13T14:56:10.748644992Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.749375661Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=732.84µs
grafana | logger=migrator t=2025-06-13T14:56:10.774849017Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-13T14:56:10.775901187Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.052451ms
grafana | logger=migrator t=2025-06-13T14:56:10.779716264Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.78039195Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=675.636µs
grafana | logger=migrator t=2025-06-13T14:56:10.785551757Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.786288527Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=736.37µs
grafana | logger=migrator t=2025-06-13T14:56:10.790507411Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.791219319Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=714.568µs
grafana | logger=migrator t=2025-06-13T14:56:10.795596514Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.795622946Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.912µs
grafana | logger=migrator t=2025-06-13T14:56:10.803741332Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.803782055Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=41.833µs
grafana | logger=migrator t=2025-06-13T14:56:10.809979053Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-13T14:56:10.810289763Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=310.661µs
grafana | logger=migrator t=2025-06-13T14:56:10.814235499Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-13T14:56:10.81528441Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.050621ms
grafana | logger=migrator t=2025-06-13T14:56:10.818782195Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-13T14:56:10.819832876Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.050211ms
grafana | logger=migrator t=2025-06-13T14:56:10.837753893Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-13T14:56:10.839109064Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.354681ms
grafana | logger=migrator t=2025-06-13T14:56:10.844882243Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-13T14:56:10.846267086Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.383463ms
grafana | logger=migrator t=2025-06-13T14:56:10.850358492Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-13T14:56:10.852004843Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.644671ms
grafana | logger=migrator t=2025-06-13T14:56:10.856043625Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.8568665Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=821.925µs
grafana | logger=migrator t=2025-06-13T14:56:10.862476288Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:10.871793186Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.318067ms
grafana | logger=migrator t=2025-06-13T14:56:10.876986875Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-13T14:56:10.87764611Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=658.835µs
grafana | logger=migrator t=2025-06-13T14:56:10.906139529Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.907722075Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.585047ms
grafana | logger=migrator t=2025-06-13T14:56:10.911786339Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.913115619Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.328339ms
grafana | logger=migrator t=2025-06-13T14:56:10.916935926Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:10.917560258Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=624.212µs
grafana | logger=migrator t=2025-06-13T14:56:10.923422353Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-13T14:56:10.924961276Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.538833ms
grafana | logger=migrator t=2025-06-13T14:56:10.928803195Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:10.928827107Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=24.792µs
grafana | logger=migrator t=2025-06-13T14:56:10.932128019Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.934426214Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.297625ms
grafana | logger=migrator t=2025-06-13T14:56:10.939157593Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-13T14:56:10.94045927Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.301558ms
grafana | logger=migrator t=2025-06-13T14:56:10.943269129Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:10.944556166Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.286637ms
grafana | logger=migrator t=2025-06-13T14:56:10.950598723Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:10.951360744Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=761.811µs
grafana | logger=migrator t=2025-06-13T14:56:10.973844489Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:10.976879563Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.034575ms
grafana | logger=migrator t=2025-06-13T14:56:10.981240217Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:10.982980064Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.738677ms
grafana | logger=migrator t=2025-06-13T14:56:10.986294427Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-13T14:56:10.98708546Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=790.913µs
grafana | logger=migrator t=2025-06-13T14:56:10.993882348Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.99390987Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=28.642µs
grafana | logger=migrator t=2025-06-13T14:56:10.998942309Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-13T14:56:10.998966941Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=25.362µs
grafana | logger=migrator t=2025-06-13T14:56:11.001499991Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:11.005264385Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.767754ms
grafana | logger=migrator t=2025-06-13T14:56:11.028330877Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:11.032721444Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=4.390267ms
grafana | logger=migrator t=2025-06-13T14:56:11.036334269Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:11.038358846Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.023697ms
grafana | logger=migrator t=2025-06-13T14:56:11.041154065Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:11.043110048Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.955362ms
grafana | logger=migrator t=2025-06-13T14:56:11.046185486Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:11.0463988Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=213.274µs
grafana | logger=migrator t=2025-06-13T14:56:11.051462063Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:11.052205983Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=743.44µs
grafana | logger=migrator t=2025-06-13T14:56:11.055760264Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-13T14:56:11.057680444Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.91937ms
grafana | logger=migrator t=2025-06-13T14:56:11.061591979Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-13T14:56:11.061630922Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.213µs
grafana | logger=migrator t=2025-06-13T14:56:11.066876067Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-13T14:56:11.067713683Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=836.836µs
grafana | logger=migrator t=2025-06-13T14:56:11.070828384Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-13T14:56:11.071988573Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.159489ms
grafana | logger=migrator t=2025-06-13T14:56:11.094083599Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:56:11.103119021Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
duration=9.034961ms grafana | logger=migrator t=2025-06-13T14:56:11.109548056Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-13T14:56:11.110305167Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=757.461µs grafana | logger=migrator t=2025-06-13T14:56:11.115059979Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-13T14:56:11.115840902Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=780.463µs grafana | logger=migrator t=2025-06-13T14:56:11.121370386Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-13T14:56:11.122220014Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=854.558µs grafana | logger=migrator t=2025-06-13T14:56:11.127416776Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:11.128056019Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=639.804µs grafana | logger=migrator t=2025-06-13T14:56:11.150485548Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:11.151179115Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=697.497µs grafana | logger=migrator t=2025-06-13T14:56:11.156331993Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-13T14:56:11.158665201Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.333728ms grafana | logger=migrator t=2025-06-13T14:56:11.162753868Z level=info 
msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-13T14:56:11.163458416Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=704.008µs grafana | logger=migrator t=2025-06-13T14:56:11.17060438Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-13T14:56:11.171127265Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=522.285µs grafana | logger=migrator t=2025-06-13T14:56:11.178095747Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-13T14:56:11.178474313Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=374.865µs grafana | logger=migrator t=2025-06-13T14:56:11.183272148Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-13T14:56:11.184697734Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.425177ms grafana | logger=migrator t=2025-06-13T14:56:11.194141253Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-13T14:56:11.196257497Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.117784ms grafana | logger=migrator t=2025-06-13T14:56:11.215872265Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-13T14:56:11.220591004Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=4.716139ms grafana | logger=migrator t=2025-06-13T14:56:11.227185861Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-13T14:56:11.228631099Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.453928ms grafana | 
logger=migrator t=2025-06-13T14:56:11.232292626Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-13T14:56:11.234256339Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=1.966033ms grafana | logger=migrator t=2025-06-13T14:56:11.237188738Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T14:56:11.239118569Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=1.928801ms grafana | logger=migrator t=2025-06-13T14:56:11.264118971Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-13T14:56:11.264815008Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=691.937µs grafana | logger=migrator t=2025-06-13T14:56:11.27222906Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-13T14:56:11.274678856Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.449076ms grafana | logger=migrator t=2025-06-13T14:56:11.279299409Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-13T14:56:11.280059511Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=760.051µs grafana | logger=migrator t=2025-06-13T14:56:11.284764109Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-13T14:56:11.286342826Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=1.575347ms grafana | logger=migrator t=2025-06-13T14:56:11.292967145Z level=info msg="Executing migration" 
id="create data_source table" grafana | logger=migrator t=2025-06-13T14:56:11.293968562Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.001578ms grafana | logger=migrator t=2025-06-13T14:56:11.300320802Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-13T14:56:11.301295218Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=974.176µs grafana | logger=migrator t=2025-06-13T14:56:11.311466347Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-13T14:56:11.312798367Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.33369ms grafana | logger=migrator t=2025-06-13T14:56:11.31903553Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-13T14:56:11.319578346Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=540.907µs grafana | logger=migrator t=2025-06-13T14:56:11.323628991Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-13T14:56:11.324137405Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=507.705µs grafana | logger=migrator t=2025-06-13T14:56:11.36054458Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-13T14:56:11.367271685Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.727555ms grafana | logger=migrator t=2025-06-13T14:56:11.400445141Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-13T14:56:11.40160454Z level=info msg="Migration 
successfully executed" id="create data_source table v2" duration=1.160469ms grafana | logger=migrator t=2025-06-13T14:56:11.407257323Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-13T14:56:11.408062757Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=805.084µs grafana | logger=migrator t=2025-06-13T14:56:11.411637189Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-13T14:56:11.412586454Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=948.724µs grafana | logger=migrator t=2025-06-13T14:56:11.41888855Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-13T14:56:11.419510032Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=621.102µs grafana | logger=migrator t=2025-06-13T14:56:11.423160739Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-13T14:56:11.428368592Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=5.206013ms grafana | logger=migrator t=2025-06-13T14:56:11.43453474Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-13T14:56:11.436561267Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.028458ms grafana | logger=migrator t=2025-06-13T14:56:11.441678843Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-13T14:56:11.441698845Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=20.372µs grafana | logger=migrator t=2025-06-13T14:56:11.446033128Z level=info msg="Executing migration" id="Update 
initial version to 1" grafana | logger=migrator t=2025-06-13T14:56:11.446268064Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=234.626µs grafana | logger=migrator t=2025-06-13T14:56:11.450703124Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-13T14:56:11.453622282Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.768147ms grafana | logger=migrator t=2025-06-13T14:56:11.460358068Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-13T14:56:11.460626626Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=271.128µs grafana | logger=migrator t=2025-06-13T14:56:11.46570174Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-13T14:56:11.465835239Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=133.969µs grafana | logger=migrator t=2025-06-13T14:56:11.502899658Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-13T14:56:11.507852034Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.946655ms grafana | logger=migrator t=2025-06-13T14:56:11.51694877Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-13T14:56:11.517308694Z level=info msg="Migration successfully executed" id="Update uid value" duration=359.295µs grafana | logger=migrator t=2025-06-13T14:56:11.522209966Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-13T14:56:11.523731479Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.520693ms grafana | logger=migrator t=2025-06-13T14:56:11.528563676Z level=info msg="Executing migration" id="add 
unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-13T14:56:11.52980108Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.236643ms grafana | logger=migrator t=2025-06-13T14:56:11.533171738Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-13T14:56:11.534926137Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=1.753869ms grafana | logger=migrator t=2025-06-13T14:56:11.540059044Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-13T14:56:11.542678902Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.619327ms grafana | logger=migrator t=2025-06-13T14:56:11.546385133Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-13T14:56:11.546403604Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=19.232µs grafana | logger=migrator t=2025-06-13T14:56:11.551152365Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-13T14:56:11.55211415Z level=info msg="Migration successfully executed" id="create api_key table" duration=961.475µs grafana | logger=migrator t=2025-06-13T14:56:11.557349455Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-13T14:56:11.558209593Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=859.548µs grafana | logger=migrator t=2025-06-13T14:56:11.562629132Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-13T14:56:11.563541594Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=911.952µs grafana | logger=migrator 
t=2025-06-13T14:56:11.567877928Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-13T14:56:11.568735856Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=857.318µs grafana | logger=migrator t=2025-06-13T14:56:11.574388199Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-13T14:56:11.575619572Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.231354ms grafana | logger=migrator t=2025-06-13T14:56:11.580549546Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-13T14:56:11.581919688Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.369042ms grafana | logger=migrator t=2025-06-13T14:56:11.586424393Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-13T14:56:11.586999222Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=574.579µs grafana | logger=migrator t=2025-06-13T14:56:11.592150441Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-13T14:56:11.602802852Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.652821ms grafana | logger=migrator t=2025-06-13T14:56:11.606891159Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-13T14:56:11.607512111Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=620.872µs grafana | logger=migrator t=2025-06-13T14:56:11.612458946Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-13T14:56:11.613045116Z 
level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=586.15µs grafana | logger=migrator t=2025-06-13T14:56:11.642383562Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-13T14:56:11.644238488Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.854546ms grafana | logger=migrator t=2025-06-13T14:56:11.649100427Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-13T14:56:11.650123606Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.023159ms grafana | logger=migrator t=2025-06-13T14:56:11.655549684Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:11.655891097Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=340.733µs grafana | logger=migrator t=2025-06-13T14:56:11.659354701Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-13T14:56:11.659874666Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=519.765µs grafana | logger=migrator t=2025-06-13T14:56:11.663878138Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-13T14:56:11.663905449Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=28.072µs grafana | logger=migrator t=2025-06-13T14:56:11.66820268Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-13T14:56:11.671501204Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.298154ms grafana | logger=migrator t=2025-06-13T14:56:11.67617147Z level=info msg="Executing migration" id="Add service account foreign 
key" grafana | logger=migrator t=2025-06-13T14:56:11.678789507Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.617447ms grafana | logger=migrator t=2025-06-13T14:56:11.682251872Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-13T14:56:11.682433214Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=183.023µs grafana | logger=migrator t=2025-06-13T14:56:11.685781731Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-13T14:56:11.688323373Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.541092ms grafana | logger=migrator t=2025-06-13T14:56:11.692628904Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-13T14:56:11.695296975Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.667691ms grafana | logger=migrator t=2025-06-13T14:56:11.698451308Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-13T14:56:11.699353519Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=901.781µs grafana | logger=migrator t=2025-06-13T14:56:11.703561434Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-13T14:56:11.704152564Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=590µs grafana | logger=migrator t=2025-06-13T14:56:11.709794216Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-13T14:56:11.71058749Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" 
duration=792.524µs grafana | logger=migrator t=2025-06-13T14:56:11.714119559Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-13T14:56:11.714993058Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=872.589µs grafana | logger=migrator t=2025-06-13T14:56:11.719120568Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-13T14:56:11.719959555Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=838.676µs grafana | logger=migrator t=2025-06-13T14:56:11.725406763Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-13T14:56:11.727167173Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.759699ms grafana | logger=migrator t=2025-06-13T14:56:11.732281229Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-13T14:56:11.73230386Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=23.681µs grafana | logger=migrator t=2025-06-13T14:56:11.73687915Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-13T14:56:11.736900112Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=21.972µs grafana | logger=migrator t=2025-06-13T14:56:11.766311983Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-13T14:56:11.771309551Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.996478ms grafana 
| logger=migrator t=2025-06-13T14:56:11.776225224Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-13T14:56:11.779040065Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.814571ms grafana | logger=migrator t=2025-06-13T14:56:11.78354587Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-13T14:56:11.783565041Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=19.911µs grafana | logger=migrator t=2025-06-13T14:56:11.886067311Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-13T14:56:11.887403172Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.33943ms grafana | logger=migrator t=2025-06-13T14:56:11.973427646Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-13T14:56:11.97496552Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.542675ms grafana | logger=migrator t=2025-06-13T14:56:12.024756644Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-13T14:56:12.02483594Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=83.346µs grafana | logger=migrator t=2025-06-13T14:56:12.032085004Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-13T14:56:12.033422505Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.337161ms grafana | logger=migrator t=2025-06-13T14:56:12.03935844Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator 
t=2025-06-13T14:56:12.040213328Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=855.288µs grafana | logger=migrator t=2025-06-13T14:56:12.044762319Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-13T14:56:12.047173823Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.411275ms grafana | logger=migrator t=2025-06-13T14:56:12.049964943Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-13T14:56:12.049992025Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.392µs grafana | logger=migrator t=2025-06-13T14:56:12.055621429Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-13T14:56:12.05593507Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=311.301µs grafana | logger=migrator t=2025-06-13T14:56:12.06062195Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-13T14:56:12.075640684Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=15.054527ms grafana | logger=migrator t=2025-06-13T14:56:12.079378179Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-13T14:56:12.080106818Z level=info msg="Migration successfully executed" id="create session table" duration=721.619µs grafana | logger=migrator t=2025-06-13T14:56:12.089774168Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-13T14:56:12.089917447Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=145.06µs grafana | logger=migrator 
t=2025-06-13T14:56:12.093273226Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-13T14:56:12.093361052Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=88.286µs
grafana | logger=migrator t=2025-06-13T14:56:12.096688199Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-13T14:56:12.097321352Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=633.023µs
grafana | logger=migrator t=2025-06-13T14:56:12.100361959Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-13T14:56:12.100861634Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=499.564µs
grafana | logger=migrator t=2025-06-13T14:56:12.105463157Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.105491089Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.382µs
grafana | logger=migrator t=2025-06-13T14:56:12.108774933Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.108836747Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=39.172µs
grafana | logger=migrator t=2025-06-13T14:56:12.11225597Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-13T14:56:12.115995025Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.739095ms
grafana | logger=migrator t=2025-06-13T14:56:12.139537371Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-13T14:56:12.141995548Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.458988ms
grafana | logger=migrator t=2025-06-13T14:56:12.145575602Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-13T14:56:12.145636296Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=60.924µs
grafana | logger=migrator t=2025-06-13T14:56:12.149255393Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-13T14:56:12.149339229Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=80.096µs
grafana | logger=migrator t=2025-06-13T14:56:12.152694528Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-13T14:56:12.153588619Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=893.37µs
grafana | logger=migrator t=2025-06-13T14:56:12.158623912Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.158648694Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.212µs
grafana | logger=migrator t=2025-06-13T14:56:12.16182726Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-13T14:56:12.165803601Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.975281ms
grafana | logger=migrator t=2025-06-13T14:56:12.168943016Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-13T14:56:12.169111597Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=166.602µs
grafana | logger=migrator t=2025-06-13T14:56:12.173330135Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-13T14:56:12.17678408Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.453245ms
grafana | logger=migrator t=2025-06-13T14:56:12.18029764Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-13T14:56:12.183439674Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.141234ms
grafana | logger=migrator t=2025-06-13T14:56:12.193552163Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:12.193587966Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=39.423µs
grafana | logger=migrator t=2025-06-13T14:56:12.196687317Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-13T14:56:12.197444999Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=757.672µs
grafana | logger=migrator t=2025-06-13T14:56:12.202773662Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-13T14:56:12.203738298Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=965.186µs
grafana | logger=migrator t=2025-06-13T14:56:12.208280468Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.209606998Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.325ms
grafana | logger=migrator t=2025-06-13T14:56:12.213238916Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-13T14:56:12.214534524Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.295218ms
grafana | logger=migrator t=2025-06-13T14:56:12.22064044Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-13T14:56:12.222049617Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.411236ms
grafana | logger=migrator t=2025-06-13T14:56:12.227550022Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:12.229325653Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.774551ms
grafana | logger=migrator t=2025-06-13T14:56:12.234107769Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.235244326Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.135388ms
grafana | logger=migrator t=2025-06-13T14:56:12.274560856Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-13T14:56:12.275264924Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=702.788µs
grafana | logger=migrator t=2025-06-13T14:56:12.279074734Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:12.279664334Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=589.1µs
grafana | logger=migrator t=2025-06-13T14:56:12.284176622Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:12.294233887Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.056266ms
grafana | logger=migrator t=2025-06-13T14:56:12.299684279Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-13T14:56:12.300499634Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=814.975µs
grafana | logger=migrator t=2025-06-13T14:56:12.304009694Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-13T14:56:12.30498125Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=970.236µs
grafana | logger=migrator t=2025-06-13T14:56:12.30849544Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:12.308880676Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=384.746µs
grafana | logger=migrator t=2025-06-13T14:56:12.313253404Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-13T14:56:12.313846985Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=592.65µs
grafana | logger=migrator t=2025-06-13T14:56:12.317538006Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.318366463Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=827.857µs
grafana | logger=migrator t=2025-06-13T14:56:12.322850658Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-13T14:56:12.328416788Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.56525ms
grafana | logger=migrator t=2025-06-13T14:56:12.333889191Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-13T14:56:12.339180112Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.290201ms
grafana | logger=migrator t=2025-06-13T14:56:12.3439881Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-13T14:56:12.348766365Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.779006ms
grafana | logger=migrator t=2025-06-13T14:56:12.352001396Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-13T14:56:12.355623953Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.618027ms
grafana | logger=migrator t=2025-06-13T14:56:12.358700403Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-13T14:56:12.359615115Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=914.442µs
grafana | logger=migrator t=2025-06-13T14:56:12.364487777Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.364514109Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.182µs
grafana | logger=migrator t=2025-06-13T14:56:12.368227562Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.368301437Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=76.295µs
grafana | logger=migrator t=2025-06-13T14:56:12.39840902Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.400264527Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.854846ms
grafana | logger=migrator t=2025-06-13T14:56:12.410116528Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:56:12.411795033Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.677764ms
grafana | logger=migrator t=2025-06-13T14:56:12.416750621Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-13T14:56:12.417501612Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=750.171µs
grafana | logger=migrator t=2025-06-13T14:56:12.423420846Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.424393542Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=971.997µs
grafana | logger=migrator t=2025-06-13T14:56:12.442047526Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:56:12.442943457Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=893.23µs
grafana | logger=migrator t=2025-06-13T14:56:12.448610863Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-13T14:56:12.452418133Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.80718ms
grafana | logger=migrator t=2025-06-13T14:56:12.460186272Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:12.466051032Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.86248ms
grafana | logger=migrator t=2025-06-13T14:56:12.471319722Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:12.471503154Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=183.233µs
grafana | logger=migrator t=2025-06-13T14:56:12.479248572Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:12.480585393Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.333181ms
grafana | logger=migrator t=2025-06-13T14:56:12.485756326Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:12.487152001Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.392024ms
grafana | logger=migrator t=2025-06-13T14:56:12.492295662Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:12.49637677Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.084829ms
grafana | logger=migrator t=2025-06-13T14:56:12.54697306Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-13T14:56:12.547065636Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=36.903µs
grafana | logger=migrator t=2025-06-13T14:56:12.551416363Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-13T14:56:12.552875102Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.458529ms
grafana | logger=migrator t=2025-06-13T14:56:12.556935979Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-13T14:56:12.557776416Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=839.197µs
grafana | logger=migrator t=2025-06-13T14:56:12.563065027Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-13T14:56:12.563149533Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=80.265µs
grafana | logger=migrator t=2025-06-13T14:56:12.567613057Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-13T14:56:12.568788587Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.17859ms
grafana | logger=migrator t=2025-06-13T14:56:12.57234791Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-13T14:56:12.573272673Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=924.023µs
grafana | logger=migrator t=2025-06-13T14:56:12.578458617Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-13T14:56:12.579300544Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=841.428µs
grafana | logger=migrator t=2025-06-13T14:56:12.582592608Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-13T14:56:12.583433806Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=840.178µs
grafana | logger=migrator t=2025-06-13T14:56:12.586858299Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-13T14:56:12.588527653Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.668734ms
grafana | logger=migrator t=2025-06-13T14:56:12.596091189Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-13T14:56:12.596967729Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=875.939µs
grafana | logger=migrator t=2025-06-13T14:56:12.601168225Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-13T14:56:12.601200537Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=32.082µs
grafana | logger=migrator t=2025-06-13T14:56:12.605859345Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.612627386Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.763891ms
grafana | logger=migrator t=2025-06-13T14:56:12.617016816Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-13T14:56:12.617851793Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=834.477µs
grafana | logger=migrator t=2025-06-13T14:56:12.672422013Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.67926017Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.836376ms
grafana | logger=migrator t=2025-06-13T14:56:12.68234197Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-13T14:56:12.682810222Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=467.931µs
grafana | logger=migrator t=2025-06-13T14:56:12.689063988Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-13T14:56:12.690509467Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.445238ms
grafana | logger=migrator t=2025-06-13T14:56:12.693871366Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:12.695116341Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.244134ms
grafana | logger=migrator t=2025-06-13T14:56:12.69818263Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-13T14:56:12.710087191Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.901301ms
grafana | logger=migrator t=2025-06-13T14:56:12.716473807Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-13T14:56:12.716986452Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=511.825µs
grafana | logger=migrator t=2025-06-13T14:56:12.719675315Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-13T14:56:12.720323279Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=647.444µs
grafana | logger=migrator t=2025-06-13T14:56:12.723318954Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-13T14:56:12.723512847Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=191.603µs
grafana | logger=migrator t=2025-06-13T14:56:12.725647352Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-13T14:56:12.726394713Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=749.051µs
grafana | logger=migrator t=2025-06-13T14:56:12.731351911Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-13T14:56:12.731535274Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=184.813µs
grafana | logger=migrator t=2025-06-13T14:56:12.733876543Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.740353215Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.475102ms
grafana | logger=migrator t=2025-06-13T14:56:12.743772068Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.74673785Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.965012ms
grafana | logger=migrator t=2025-06-13T14:56:12.753220842Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.754162557Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=941.064µs
grafana | logger=migrator t=2025-06-13T14:56:12.757171112Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.75802924Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=857.728µs
grafana | logger=migrator t=2025-06-13T14:56:12.761211507Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-13T14:56:12.761456534Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=244.667µs
grafana | logger=migrator t=2025-06-13T14:56:12.787512811Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-13T14:56:12.792013287Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.500807ms
grafana | logger=migrator t=2025-06-13T14:56:12.797263695Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-13T14:56:12.798198099Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=934.014µs
grafana | logger=migrator t=2025-06-13T14:56:12.803210711Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-13T14:56:12.803620119Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=413.328µs
grafana | logger=migrator t=2025-06-13T14:56:12.808731957Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-13T14:56:12.809459557Z level=info msg="Migration successfully executed" id="Move region to single row" duration=724.69µs
grafana | logger=migrator t=2025-06-13T14:56:12.815442885Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.816319715Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=877.24µs
grafana | logger=migrator t=2025-06-13T14:56:12.821346417Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.822453643Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.107216ms
grafana | logger=migrator t=2025-06-13T14:56:12.835575098Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.838178055Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=2.601558ms
grafana | logger=migrator t=2025-06-13T14:56:12.847863395Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.849449974Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.587068ms
grafana | logger=migrator t=2025-06-13T14:56:12.912827705Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.915001043Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=2.175628ms
grafana | logger=migrator t=2025-06-13T14:56:12.920650678Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:12.922496814Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.848346ms
grafana | logger=migrator t=2025-06-13T14:56:12.929171979Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-13T14:56:12.929206102Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=35.602µs
grafana | logger=migrator t=2025-06-13T14:56:12.936699873Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:56:12.936731385Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=32.763µs
grafana | logger=migrator t=2025-06-13T14:56:12.942127963Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:56:12.942146464Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=19.441µs
grafana | logger=migrator t=2025-06-13T14:56:12.946139526Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-13T14:56:12.947004535Z level=info msg="Migration successfully executed" id="create test_data table" duration=864.969µs
grafana | logger=migrator t=2025-06-13T14:56:12.953096611Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.954079618Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=982.627µs
grafana | logger=migrator t=2025-06-13T14:56:12.959037826Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:12.959936627Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=895.661µs
grafana | logger=migrator t=2025-06-13T14:56:12.963438656Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-13T14:56:12.964343017Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=903.911µs
grafana | logger=migrator t=2025-06-13T14:56:12.968984334Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-13T14:56:12.96921554Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=231.305µs
grafana | logger=migrator t=2025-06-13T14:56:12.973436637Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:56:12.973802562Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=370.435µs
grafana | logger=migrator t=2025-06-13T14:56:12.977049024Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:12.977084396Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=36.432µs
grafana | logger=migrator t=2025-06-13T14:56:12.980879555Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-13T14:56:12.985952211Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.071586ms
grafana | logger=migrator t=2025-06-13T14:56:12.98974773Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-13T14:56:12.990768129Z level=info msg="Migration successfully executed" id="create team table" duration=1.02058ms
grafana | logger=migrator t=2025-06-13T14:56:12.995167289Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-13T14:56:12.995823854Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=657.635µs
grafana | logger=migrator t=2025-06-13T14:56:13.004222704Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:13.005593976Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.368162ms
grafana | logger=migrator t=2025-06-13T14:56:13.025610114Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-13T14:56:13.03489402Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=9.281435ms
grafana | logger=migrator t=2025-06-13T14:56:13.041187893Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-13T14:56:13.041519636Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=337.743µs
grafana | logger=migrator t=2025-06-13T14:56:13.045481383Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:13.046683834Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.198571ms
grafana | logger=migrator t=2025-06-13T14:56:13.053205293Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-13T14:56:13.058317217Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.115645ms
grafana | logger=migrator t=2025-06-13T14:56:13.062982041Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-13T14:56:13.067481414Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.499243ms
grafana | logger=migrator t=2025-06-13T14:56:13.075270569Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-13T14:56:13.076153088Z level=info msg="Migration successfully executed" id="create team member table" duration=881.619µs
grafana | logger=migrator t=2025-06-13T14:56:13.089164095Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-13T14:56:13.090962526Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.798591ms
grafana | logger=migrator t=2025-06-13T14:56:13.09562363Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-13T14:56:13.096561193Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=936.683µs
grafana | logger=migrator t=2025-06-13T14:56:13.102719438Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-13T14:56:13.10379808Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.075393ms
grafana | logger=migrator t=2025-06-13T14:56:13.108743223Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-13T14:56:13.113646463Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.90264ms
grafana | logger=migrator t=2025-06-13T14:56:13.168971069Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-13T14:56:13.175083841Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.113152ms
grafana | logger=migrator t=2025-06-13T14:56:13.180780825Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-13T14:56:13.185512784Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.731458ms
grafana | logger=migrator t=2025-06-13T14:56:13.188769773Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-13T14:56:13.189889198Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.119025ms
grafana | logger=migrator t=2025-06-13T14:56:13.193840274Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-13T14:56:13.194727344Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=886.35µs
grafana | logger=migrator t=2025-06-13T14:56:13.19897303Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:13.199838408Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=866.918µs
grafana | logger=migrator t=2025-06-13T14:56:13.20297936Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-13T14:56:13.2038708Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=888.5µs
grafana | logger=migrator t=2025-06-13T14:56:13.207463962Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-13T14:56:13.208343891Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=879.689µs
grafana | logger=migrator t=2025-06-13T14:56:13.212495801Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-13T14:56:13.213395901Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=899.95µs
grafana | logger=migrator t=2025-06-13T14:56:13.216520822Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-13T14:56:13.217654208Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.133366ms
grafana | logger=migrator t=2025-06-13T14:56:13.220731615Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-13T14:56:13.221712422Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=979.496µs
grafana | logger=migrator t=2025-06-13T14:56:13.226012381Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-13T14:56:13.22688137Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=868.879µs
grafana | logger=migrator t=2025-06-13T14:56:13.23030799Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-13T14:56:13.230756501Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=447.52µs
grafana | logger=migrator t=2025-06-13T14:56:13.235325288Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-13T14:56:13.235675592Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=350.054µs
grafana | logger=migrator t=2025-06-13T14:56:13.239098282Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-13T14:56:13.240180105Z level=info msg="Migration successfully executed" id="create tag table" duration=1.081423ms
grafana | logger=migrator t=2025-06-13T14:56:13.243702693Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-13T14:56:13.244840249Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.137117ms
grafana | logger=migrator t=2025-06-13T14:56:13.249928372Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-13T14:56:13.250738766Z level=info msg="Migration successfully executed" id="create login attempt table" duration=807.274µs
grafana | logger=migrator t=2025-06-13T14:56:13.254393113Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-13T14:56:13.255360158Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=965.505µs
grafana | logger=migrator t=2025-06-13T14:56:13.259563391Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-13T14:56:13.260433069Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=869.878µs
grafana |
logger=migrator t=2025-06-13T14:56:13.265739537Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:13.281241471Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.493603ms grafana | logger=migrator t=2025-06-13T14:56:13.314704064Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-13T14:56:13.315834021Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.133336ms grafana | logger=migrator t=2025-06-13T14:56:13.319659598Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-13T14:56:13.32161283Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.953371ms grafana | logger=migrator t=2025-06-13T14:56:13.326691472Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:13.327305463Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=613.862µs grafana | logger=migrator t=2025-06-13T14:56:13.330925367Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:13.331630944Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=704.717µs grafana | logger=migrator t=2025-06-13T14:56:13.33513035Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-13T14:56:13.335950245Z level=info msg="Migration successfully executed" id="create user auth table" duration=818.965µs grafana | logger=migrator t=2025-06-13T14:56:13.341935298Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator 
t=2025-06-13T14:56:13.343200874Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.295348ms grafana | logger=migrator t=2025-06-13T14:56:13.34656406Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-13T14:56:13.346583941Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=21.081µs grafana | logger=migrator t=2025-06-13T14:56:13.349486787Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.355702486Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.215228ms grafana | logger=migrator t=2025-06-13T14:56:13.360143975Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.365401229Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.256144ms grafana | logger=migrator t=2025-06-13T14:56:13.368523009Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.374999995Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.476616ms grafana | logger=migrator t=2025-06-13T14:56:13.3780399Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.381827465Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.786885ms grafana | logger=migrator t=2025-06-13T14:56:13.401766708Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.403154231Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.388593ms grafana | 
logger=migrator t=2025-06-13T14:56:13.406825359Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.415193292Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.368444ms grafana | logger=migrator t=2025-06-13T14:56:13.419635131Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-13T14:56:13.424240982Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.60532ms grafana | logger=migrator t=2025-06-13T14:56:13.43075698Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-13T14:56:13.431520342Z level=info msg="Migration successfully executed" id="create server_lock table" duration=762.352µs grafana | logger=migrator t=2025-06-13T14:56:13.435846953Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-13T14:56:13.436612255Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=765.022µs grafana | logger=migrator t=2025-06-13T14:56:13.439641869Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-13T14:56:13.440355517Z level=info msg="Migration successfully executed" id="create user auth token table" duration=713.018µs grafana | logger=migrator t=2025-06-13T14:56:13.446275866Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-13T14:56:13.448235158Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.962712ms grafana | logger=migrator t=2025-06-13T14:56:13.452604482Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-13T14:56:13.453387405Z 
level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=781.992µs grafana | logger=migrator t=2025-06-13T14:56:13.458205139Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-13T14:56:13.459088339Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=882.009µs grafana | logger=migrator t=2025-06-13T14:56:13.464183292Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-13T14:56:13.471616572Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.433141ms grafana | logger=migrator t=2025-06-13T14:56:13.484146246Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-13T14:56:13.485734783Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.587657ms grafana | logger=migrator t=2025-06-13T14:56:13.489584882Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-13T14:56:13.498588769Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.002377ms grafana | logger=migrator t=2025-06-13T14:56:13.510077943Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-13T14:56:13.511133234Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.058472ms grafana | logger=migrator t=2025-06-13T14:56:13.514381562Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-13T14:56:13.515115742Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=736.09µs grafana | logger=migrator 
t=2025-06-13T14:56:13.519498767Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-13T14:56:13.52014303Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=643.773µs grafana | logger=migrator t=2025-06-13T14:56:13.523653437Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-13T14:56:13.524387776Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=733.889µs grafana | logger=migrator t=2025-06-13T14:56:13.528204633Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T14:56:13.528276068Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=71.545µs grafana | logger=migrator t=2025-06-13T14:56:13.534172695Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-13T14:56:13.534326796Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=153.711µs grafana | logger=migrator t=2025-06-13T14:56:13.538417121Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-13T14:56:13.539164572Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=746.57µs grafana | logger=migrator t=2025-06-13T14:56:13.549303834Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:56:13.550090987Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=786.873µs grafana | logger=migrator t=2025-06-13T14:56:13.553937366Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid 
columns" grafana | logger=migrator t=2025-06-13T14:56:13.55472557Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=787.763µs grafana | logger=migrator t=2025-06-13T14:56:13.559653531Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T14:56:13.559741957Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=88.926µs grafana | logger=migrator t=2025-06-13T14:56:13.56333979Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:56:13.564115352Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=774.972µs grafana | logger=migrator t=2025-06-13T14:56:13.568400531Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:56:13.569180343Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=777.893µs grafana | logger=migrator t=2025-06-13T14:56:13.573809615Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:56:13.574698125Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=887.529µs grafana | logger=migrator t=2025-06-13T14:56:13.578353521Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:56:13.579297604Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=943.473µs grafana | 
logger=migrator t=2025-06-13T14:56:13.58397738Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-13T14:56:13.588325092Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.347142ms grafana | logger=migrator t=2025-06-13T14:56:13.693689679Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-13T14:56:13.695596927Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.914919ms grafana | logger=migrator t=2025-06-13T14:56:13.800857416Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:56:13.801222391Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=368.265µs grafana | logger=migrator t=2025-06-13T14:56:13.92430171Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:56:13.92609047Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.79182ms grafana | logger=migrator t=2025-06-13T14:56:13.998110801Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-13T14:56:14.000007459Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.900078ms grafana | logger=migrator t=2025-06-13T14:56:14.066242674Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-13T14:56:14.068469153Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version 
columns" duration=2.23356ms grafana | logger=migrator t=2025-06-13T14:56:14.076300869Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T14:56:14.076332911Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=34.732µs grafana | logger=migrator t=2025-06-13T14:56:14.081596954Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-13T14:56:14.082740381Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.144197ms grafana | logger=migrator t=2025-06-13T14:56:14.08853169Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-13T14:56:14.089898632Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.372822ms grafana | logger=migrator t=2025-06-13T14:56:14.094882976Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-13T14:56:14.096345914Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.457778ms grafana | logger=migrator t=2025-06-13T14:56:14.101414194Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-13T14:56:14.102496517Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.082423ms grafana | logger=migrator t=2025-06-13T14:56:14.108534142Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.118607208Z level=info 
msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=10.068766ms grafana | logger=migrator t=2025-06-13T14:56:14.124170962Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.125217492Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.047551ms grafana | logger=migrator t=2025-06-13T14:56:14.128075734Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.129028518Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=951.963µs grafana | logger=migrator t=2025-06-13T14:56:14.151787445Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.183126318Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=31.334012ms grafana | logger=migrator t=2025-06-13T14:56:14.187893028Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.223546921Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=35.646792ms grafana | logger=migrator t=2025-06-13T14:56:14.229543043Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.230347137Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=804.314µs grafana | logger=migrator t=2025-06-13T14:56:14.234317584Z level=info msg="Executing migration" id="add index rule_org_id, current_state on 
alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.235503053Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.18382ms grafana | logger=migrator t=2025-06-13T14:56:14.240938978Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-13T14:56:14.252706598Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=11.767549ms grafana | logger=migrator t=2025-06-13T14:56:14.276247908Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-13T14:56:14.280800803Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.595028ms grafana | logger=migrator t=2025-06-13T14:56:14.300773613Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:14.302487659Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.677173ms grafana | logger=migrator t=2025-06-13T14:56:14.307206315Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-13T14:56:14.308892768Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.685043ms grafana | logger=migrator t=2025-06-13T14:56:14.313670729Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-13T14:56:14.315165379Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.49371ms grafana | logger=migrator t=2025-06-13T14:56:14.320079669Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana 
| logger=migrator t=2025-06-13T14:56:14.321701688Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.621389ms grafana | logger=migrator t=2025-06-13T14:56:14.325580008Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T14:56:14.325598259Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=18.871µs grafana | logger=migrator t=2025-06-13T14:56:14.329168639Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.335976516Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.806977ms grafana | logger=migrator t=2025-06-13T14:56:14.340515671Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.346597899Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.081229ms grafana | logger=migrator t=2025-06-13T14:56:14.350515802Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.360004328Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.489687ms grafana | logger=migrator t=2025-06-13T14:56:14.363536606Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-13T14:56:14.364237583Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=700.697µs grafana | logger=migrator t=2025-06-13T14:56:14.36970579Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator 
t=2025-06-13T14:56:14.370807854Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.101604ms grafana | logger=migrator t=2025-06-13T14:56:14.389427353Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.400983839Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=11.548895ms grafana | logger=migrator t=2025-06-13T14:56:14.405988324Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.4123334Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.343796ms grafana | logger=migrator t=2025-06-13T14:56:14.425362045Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-13T14:56:14.427577543Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.219719ms grafana | logger=migrator t=2025-06-13T14:56:14.431296053Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:14.437589125Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.291742ms grafana | logger=migrator t=2025-06-13T14:56:14.44094006Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:14.4470511Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.11013ms grafana | logger=migrator t=2025-06-13T14:56:14.451703432Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:14.451720504Z level=info 
msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=17.792µs grafana | logger=migrator t=2025-06-13T14:56:14.455194767Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:14.456194924Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=999.547µs grafana | logger=migrator t=2025-06-13T14:56:14.46001604Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T14:56:14.462122812Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.105191ms grafana | logger=migrator t=2025-06-13T14:56:14.46776264Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-13T14:56:14.468782279Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.019278ms grafana | logger=migrator t=2025-06-13T14:56:14.472580263Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T14:56:14.472600105Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=20.022µs grafana | logger=migrator t=2025-06-13T14:56:14.476903644Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:14.483245109Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.321234ms grafana | logger=migrator t=2025-06-13T14:56:14.487563299Z level=info msg="Executing migration" id="add 
column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:14.493956008Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.391739ms grafana | logger=migrator t=2025-06-13T14:56:14.505965934Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:14.514886243Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.920569ms grafana | logger=migrator t=2025-06-13T14:56:14.519379794Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:14.525604952Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.224438ms grafana | logger=migrator t=2025-06-13T14:56:14.529713488Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-13T14:56:14.536072664Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.355936ms grafana | logger=migrator t=2025-06-13T14:56:14.546523136Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:14.546554248Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=32.322µs grafana | logger=migrator t=2025-06-13T14:56:14.551523271Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-13T14:56:14.552808668Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.283876ms grafana | logger=migrator t=2025-06-13T14:56:14.558554883Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator 
t=2025-06-13T14:56:14.565079111Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.523398ms
grafana | logger=migrator t=2025-06-13T14:56:14.570119589Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2025-06-13T14:56:14.570137771Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=18.912µs
grafana | logger=migrator t=2025-06-13T14:56:14.574318621Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:14.58264589Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.327889ms
grafana | logger=migrator t=2025-06-13T14:56:14.586058219Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2025-06-13T14:56:14.587126891Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.068562ms
grafana | logger=migrator t=2025-06-13T14:56:14.591772553Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:14.598202764Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.429402ms
grafana | logger=migrator t=2025-06-13T14:56:14.602324951Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2025-06-13T14:56:14.603390752Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.064961ms
grafana | logger=migrator t=2025-06-13T14:56:14.606988944Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2025-06-13T14:56:14.608372827Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.366271ms
grafana | logger=migrator t=2025-06-13T14:56:14.628334156Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:14.637205702Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.875615ms
grafana | logger=migrator t=2025-06-13T14:56:14.641018677Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2025-06-13T14:56:14.64195397Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=934.403µs
grafana | logger=migrator t=2025-06-13T14:56:14.648464577Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2025-06-13T14:56:14.649598273Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.132946ms
grafana | logger=migrator t=2025-06-13T14:56:14.653048715Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:14.654503202Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.453917ms
grafana | logger=migrator t=2025-06-13T14:56:14.669352389Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:14.671034072Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.682063ms
grafana | logger=migrator t=2025-06-13T14:56:14.674927983Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:14.674983367Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=56.174µs
grafana | logger=migrator t=2025-06-13T14:56:14.680798417Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2025-06-13T14:56:14.682506112Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.709434ms
grafana | logger=migrator t=2025-06-13T14:56:14.685898479Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:14.687464524Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.565785ms
grafana | logger=migrator t=2025-06-13T14:56:14.692985945Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-13T14:56:14.693487469Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-13T14:56:14.696677883Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2025-06-13T14:56:14.697259192Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=579.239µs
grafana | logger=migrator t=2025-06-13T14:56:14.70185558Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:14.703675122Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.819312ms
grafana | logger=migrator t=2025-06-13T14:56:14.706964613Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-13T14:56:14.713798822Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.833379ms
grafana | logger=migrator t=2025-06-13T14:56:14.719518416Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-13T14:56:14.720593118Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.073943ms
grafana | logger=migrator t=2025-06-13T14:56:14.724084902Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-13T14:56:14.726007751Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.922109ms
grafana | logger=migrator t=2025-06-13T14:56:14.743625423Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-13T14:56:14.74491561Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.290107ms
grafana | logger=migrator t=2025-06-13T14:56:14.748578676Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-13T14:56:14.750227536Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.64804ms
grafana | logger=migrator t=2025-06-13T14:56:14.753694859Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:14.754688876Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=993.437µs
grafana | logger=migrator t=2025-06-13T14:56:14.757887561Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-13T14:56:14.757911152Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.352µs
grafana | logger=migrator t=2025-06-13T14:56:14.7650355Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-13T14:56:14.765064742Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=30.712µs
grafana | logger=migrator t=2025-06-13T14:56:14.768391665Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-13T14:56:14.780311345Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=11.92026ms
grafana | logger=migrator t=2025-06-13T14:56:14.788840528Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-13T14:56:14.789455929Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=618.992µs
grafana | logger=migrator t=2025-06-13T14:56:14.796650482Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-13T14:56:14.798636535Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.984363ms
grafana | logger=migrator t=2025-06-13T14:56:14.802302711Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-13T14:56:14.80273687Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=433.319µs
grafana | logger=migrator t=2025-06-13T14:56:14.805991439Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-13T14:56:14.806970315Z level=info msg="Migration successfully executed" id="create data_keys table" duration=978.135µs
grafana | logger=migrator t=2025-06-13T14:56:14.810116926Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-13T14:56:14.810943001Z level=info msg="Migration successfully executed" id="create secrets table" duration=825.655µs
grafana | logger=migrator t=2025-06-13T14:56:14.816749961Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-13T14:56:14.857643425Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=40.886774ms
grafana | logger=migrator t=2025-06-13T14:56:14.873974431Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-13T14:56:14.883107384Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.130103ms
grafana | logger=migrator t=2025-06-13T14:56:14.887050599Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-13T14:56:14.887209469Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=158.63µs
grafana | logger=migrator t=2025-06-13T14:56:14.891676809Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-13T14:56:14.925463647Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.784947ms
grafana | logger=migrator t=2025-06-13T14:56:14.9290916Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-13T14:56:14.959914709Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.822008ms
grafana | logger=migrator t=2025-06-13T14:56:14.963471447Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-13T14:56:14.964141102Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=669.305µs
grafana | logger=migrator t=2025-06-13T14:56:14.968939114Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-13T14:56:14.970170237Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.231223ms
grafana | logger=migrator t=2025-06-13T14:56:14.989408088Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-13T14:56:14.989632383Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=225.135µs
grafana | logger=migrator t=2025-06-13T14:56:14.994243422Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-13T14:56:14.994896506Z level=info msg="Migration successfully executed" id="create permission table" duration=652.964µs
grafana | logger=migrator t=2025-06-13T14:56:14.998296935Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2025-06-13T14:56:14.999083947Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=786.553µs
grafana | logger=migrator t=2025-06-13T14:56:15.004090883Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2025-06-13T14:56:15.004847704Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=756.711µs
grafana | logger=migrator t=2025-06-13T14:56:15.018221162Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2025-06-13T14:56:15.019772216Z level=info msg="Migration successfully executed" id="create role table" duration=1.528143ms
grafana | logger=migrator t=2025-06-13T14:56:15.059512743Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2025-06-13T14:56:15.066382664Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.872192ms
grafana | logger=migrator t=2025-06-13T14:56:15.079684746Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2025-06-13T14:56:15.091581355Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.873587ms
grafana | logger=migrator t=2025-06-13T14:56:15.104567646Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:15.105616577Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.048831ms
grafana | logger=migrator t=2025-06-13T14:56:15.109413342Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:15.110385847Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=971.726µs
grafana | logger=migrator t=2025-06-13T14:56:15.114362964Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:15.115465238Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.100824ms
grafana | logger=migrator t=2025-06-13T14:56:15.120785975Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2025-06-13T14:56:15.121817884Z level=info msg="Migration successfully executed" id="create team role table" duration=1.032119ms
grafana | logger=migrator t=2025-06-13T14:56:15.127416Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:15.129318507Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.900327ms
grafana | logger=migrator t=2025-06-13T14:56:15.13472034Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2025-06-13T14:56:15.1360663Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.34516ms
grafana | logger=migrator t=2025-06-13T14:56:15.140634397Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2025-06-13T14:56:15.141993918Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.359161ms
grafana | logger=migrator t=2025-06-13T14:56:15.145421138Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-13T14:56:15.146334379Z level=info msg="Migration successfully executed" id="create user role table" duration=912.681µs
grafana | logger=migrator t=2025-06-13T14:56:15.151643756Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:15.152816044Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.174989ms
grafana | logger=migrator t=2025-06-13T14:56:15.164978541Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-13T14:56:15.166706807Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.727666ms
grafana | logger=migrator t=2025-06-13T14:56:15.170611099Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-13T14:56:15.172399729Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.78815ms
grafana | logger=migrator t=2025-06-13T14:56:15.177270655Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2025-06-13T14:56:15.178214929Z level=info msg="Migration successfully executed" id="create builtin role table" duration=944.054µs
grafana | logger=migrator t=2025-06-13T14:56:15.181875775Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2025-06-13T14:56:15.1830053Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.132406ms
grafana | logger=migrator t=2025-06-13T14:56:15.187639681Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2025-06-13T14:56:15.188725094Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.084923ms
grafana | logger=migrator t=2025-06-13T14:56:15.193103058Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2025-06-13T14:56:15.199029136Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.925508ms
grafana | logger=migrator t=2025-06-13T14:56:15.20340857Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:15.204487132Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.078152ms
grafana | logger=migrator t=2025-06-13T14:56:15.219540152Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2025-06-13T14:56:15.221325162Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.78465ms
grafana | logger=migrator t=2025-06-13T14:56:15.225290448Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:15.227035285Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.744777ms
grafana | logger=migrator t=2025-06-13T14:56:15.230749164Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2025-06-13T14:56:15.232596928Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.847184ms
grafana | logger=migrator t=2025-06-13T14:56:15.239590108Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2025-06-13T14:56:15.240441895Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=851.117µs
grafana | logger=migrator t=2025-06-13T14:56:15.243739706Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2025-06-13T14:56:15.244872922Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.132946ms
grafana | logger=migrator t=2025-06-13T14:56:15.249270777Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2025-06-13T14:56:15.257469878Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.163368ms
grafana | logger=migrator t=2025-06-13T14:56:15.263878688Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2025-06-13T14:56:15.270904169Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.020911ms
grafana | logger=migrator t=2025-06-13T14:56:15.283789934Z level=info msg="Executing migration" id="permission attribute migration"
grafana | logger=migrator t=2025-06-13T14:56:15.297791664Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=13.998049ms
grafana | logger=migrator t=2025-06-13T14:56:15.303720482Z level=info msg="Executing migration" id="permission identifier migration"
grafana | logger=migrator t=2025-06-13T14:56:15.30965731Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.936999ms
grafana | logger=migrator t=2025-06-13T14:56:15.315668663Z level=info msg="Executing migration" id="add permission identifier index"
grafana | logger=migrator t=2025-06-13T14:56:15.316535142Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=869.468µs
grafana | logger=migrator t=2025-06-13T14:56:15.319861325Z level=info msg="Executing migration" id="add permission action scope role_id index"
grafana | logger=migrator t=2025-06-13T14:56:15.321059775Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.19739ms
grafana | logger=migrator t=2025-06-13T14:56:15.338353756Z level=info msg="Executing migration" id="remove permission role_id action scope index"
grafana | logger=migrator t=2025-06-13T14:56:15.339978165Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.62792ms
grafana | logger=migrator t=2025-06-13T14:56:15.343438017Z level=info msg="Executing migration" id="add group mapping UID column to user_role table"
grafana | logger=migrator t=2025-06-13T14:56:15.352491375Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=9.054647ms
grafana | logger=migrator t=2025-06-13T14:56:15.358342627Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index"
grafana | logger=migrator t=2025-06-13T14:56:15.359502305Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.159318ms
grafana | logger=migrator t=2025-06-13T14:56:15.362469004Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index"
grafana | logger=migrator t=2025-06-13T14:56:15.363465881Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=996.827µs
grafana | logger=migrator t=2025-06-13T14:56:15.366227396Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2025-06-13T14:56:15.367004599Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=777.623µs
grafana | logger=migrator t=2025-06-13T14:56:15.372158184Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2025-06-13T14:56:15.372953238Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=794.634µs
grafana | logger=migrator t=2025-06-13T14:56:15.376194725Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-13T14:56:15.376214747Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=24.622µs
grafana | logger=migrator t=2025-06-13T14:56:15.379015685Z level=info msg="Executing migration" id="create query_history_details table v1"
grafana | logger=migrator t=2025-06-13T14:56:15.379864302Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=848.206µs
grafana | logger=migrator t=2025-06-13T14:56:15.385758297Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2025-06-13T14:56:15.385819501Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=62.624µs
grafana | logger=migrator t=2025-06-13T14:56:15.389496538Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2025-06-13T14:56:15.39026894Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=775.712µs
grafana | logger=migrator t=2025-06-13T14:56:15.405229024Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2025-06-13T14:56:15.405946462Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=726.159µs
grafana | logger=migrator t=2025-06-13T14:56:15.409250494Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
grafana | logger=migrator t=2025-06-13T14:56:15.410072909Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=821.665µs
grafana | logger=migrator t=2025-06-13T14:56:15.413779798Z level=info msg="Executing migration" id="drop managed folder create actions"
grafana | logger=migrator t=2025-06-13T14:56:15.414073157Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=291.759µs
grafana | logger=migrator t=2025-06-13T14:56:15.419250135Z level=info msg="Executing migration" id="alerting notification permissions"
grafana | logger=migrator t=2025-06-13T14:56:15.420010456Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=760.201µs
grafana | logger=migrator t=2025-06-13T14:56:15.426116256Z level=info msg="Executing migration" id="create query_history_star table v1"
grafana | logger=migrator t=2025-06-13T14:56:15.427241891Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.129206ms
grafana | logger=migrator t=2025-06-13T14:56:15.432048544Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
grafana | logger=migrator t=2025-06-13T14:56:15.433254375Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.205401ms
grafana | logger=migrator t=2025-06-13T14:56:15.436480611Z level=info msg="Executing migration" id="add column org_id in query_history_star"
grafana | logger=migrator t=2025-06-13T14:56:15.444667751Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.186809ms
grafana | logger=migrator t=2025-06-13T14:56:15.452534579Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
grafana | logger=migrator t=2025-06-13T14:56:15.45255358Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=22.812µs
grafana | logger=migrator t=2025-06-13T14:56:15.458397582Z level=info msg="Executing migration" id="create correlation table v1"
grafana | logger=migrator t=2025-06-13T14:56:15.459572941Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.173339ms
grafana | logger=migrator t=2025-06-13T14:56:15.462571622Z level=info msg="Executing migration" id="add index correlations.uid"
grafana | logger=migrator t=2025-06-13T14:56:15.463644614Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.072992ms
grafana | logger=migrator t=2025-06-13T14:56:15.466829748Z level=info msg="Executing migration" id="add index correlations.source_uid"
grafana | logger=migrator t=2025-06-13T14:56:15.46791082Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.080812ms
grafana | logger=migrator t=2025-06-13T14:56:15.470972426Z level=info msg="Executing migration" id="add correlation config column"
grafana | logger=migrator t=2025-06-13T14:56:15.481549626Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.568729ms
grafana | logger=migrator t=2025-06-13T14:56:15.485382323Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.486952878Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.571915ms
grafana | logger=migrator t=2025-06-13T14:56:15.491561148Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.492623149Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.062702ms
grafana | logger=migrator t=2025-06-13T14:56:15.498019321Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.517097371Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=19.0755ms
grafana | logger=migrator t=2025-06-13T14:56:15.524448625Z level=info msg="Executing migration" id="create correlation v2"
grafana | logger=migrator t=2025-06-13T14:56:15.525745222Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.295647ms
grafana | logger=migrator t=2025-06-13T14:56:15.530809652Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.532471003Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.661102ms
grafana | logger=migrator t=2025-06-13T14:56:15.536506784Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.538319226Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.812271ms
grafana | logger=migrator t=2025-06-13T14:56:15.542864211Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.544032459Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.167789ms
grafana | logger=migrator t=2025-06-13T14:56:15.547735107Z level=info msg="Executing migration" id="copy correlation v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:15.547978824Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=244.327µs
grafana | logger=migrator t=2025-06-13T14:56:15.561511892Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T14:56:15.563492465Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.981563ms
grafana | logger=migrator t=2025-06-13T14:56:15.568291997Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2025-06-13T14:56:15.578792082Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.500005ms
grafana | logger=migrator t=2025-06-13T14:56:15.581972875Z level=info msg="Executing migration" id="add type column"
grafana | logger=migrator t=2025-06-13T14:56:15.590315555Z level=info msg="Migration successfully executed" id="add type column" duration=8.34219ms
grafana | logger=migrator t=2025-06-13T14:56:15.594928205Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2025-06-13T14:56:15.595712297Z level=info msg="Migration successfully executed" id="create entity_events table" duration=779.192µs
grafana | logger=migrator t=2025-06-13T14:56:15.600505109Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2025-06-13T14:56:15.601597972Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.089463ms
grafana | logger=migrator t=2025-06-13T14:56:15.6073814Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.607857432Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.611211757Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.611797737Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.615403399Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2025-06-13T14:56:15.616227974Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=823.825µs
grafana | logger=migrator t=2025-06-13T14:56:15.619528896Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2025-06-13T14:56:15.620590607Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.060782ms
grafana | logger=migrator t=2025-06-13T14:56:15.626418528Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.628197037Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.777859ms
grafana | logger=migrator t=2025-06-13T14:56:15.643607742Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-13T14:56:15.646303572Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.695241ms
grafana | logger=migrator t=2025-06-13T14:56:15.655662931Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.656786846Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.123416ms
grafana | logger=migrator t=2025-06-13T14:56:15.660243448Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.661554656Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.311328ms
grafana | logger=migrator t=2025-06-13T14:56:15.6650473Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2025-06-13T14:56:15.665962812Z level=info msg="Migration successfully executed" id="Drop public config table" duration=915.062µs
grafana | logger=migrator t=2025-06-13T14:56:15.680413262Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2025-06-13T14:56:15.68291755Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.503018ms
grafana | logger=migrator t=2025-06-13T14:56:15.690543221Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.691614003Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.070522ms
grafana | logger=migrator t=2025-06-13T14:56:15.695952074Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.69827592Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.322326ms
grafana | logger=migrator t=2025-06-13T14:56:15.704330857Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.705441131Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.110044ms
grafana | logger=migrator t=2025-06-13T14:56:15.712962326Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2025-06-13T14:56:15.737381365Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.418709ms
grafana | logger=migrator t=2025-06-13T14:56:15.744069714Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2025-06-13T14:56:15.751102516Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.030502ms
grafana | logger=migrator t=2025-06-13T14:56:15.762900097Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2025-06-13T14:56:15.769183529Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.282812ms
grafana | logger=migrator t=2025-06-13T14:56:15.773010956Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2025-06-13T14:56:15.773231921Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=220.945µs
grafana | logger=migrator t=2025-06-13T14:56:15.77887787Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2025-06-13T14:56:15.784967148Z level=info msg="Migration successfully executed" id="add share column" duration=6.088839ms
grafana | logger=migrator t=2025-06-13T14:56:15.795062796Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2025-06-13T14:56:15.795238248Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=176.992µs
grafana | logger=migrator t=2025-06-13T14:56:15.799376485Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2025-06-13T14:56:15.801216949Z level=info msg="Migration successfully executed" id="create file table" duration=1.839553ms
grafana | logger=migrator t=2025-06-13T14:56:15.806494253Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2025-06-13T14:56:15.807568185Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.073392ms
grafana | logger=migrator t=2025-06-13T14:56:15.813621691Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2025-06-13T14:56:15.815907635Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.289543ms
grafana | logger=migrator t=2025-06-13T14:56:15.822660798Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2025-06-13T14:56:15.823632463Z level=info msg="Migration successfully executed" id="create file_meta table" duration=971.595µs
grafana | logger=migrator t=2025-06-13T14:56:15.828434555Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2025-06-13T14:56:15.830769022Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.338327ms
grafana | logger=migrator t=2025-06-13T14:56:15.834751109Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2025-06-13T14:56:15.834770591Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.231µs
grafana | logger=migrator t=2025-06-13T14:56:15.840191483Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2025-06-13T14:56:15.840209615Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=19.151µs
grafana | logger=migrator t=2025-06-13T14:56:15.846259741Z level=info msg="Executing migration" id="managed
permissions migration" grafana | logger=migrator t=2025-06-13T14:56:15.847338293Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.078243ms grafana | logger=migrator t=2025-06-13T14:56:15.851656243Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-13T14:56:15.851998866Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=345.064µs grafana | logger=migrator t=2025-06-13T14:56:15.862406664Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-13T14:56:15.864011702Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.604108ms grafana | logger=migrator t=2025-06-13T14:56:15.880576184Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-13T14:56:15.890338699Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.765766ms grafana | logger=migrator t=2025-06-13T14:56:15.897166197Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-13T14:56:15.897451836Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=284.979µs grafana | logger=migrator t=2025-06-13T14:56:15.907876486Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-13T14:56:15.910383914Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.506098ms grafana | logger=migrator t=2025-06-13T14:56:15.925252752Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-13T14:56:15.926003012Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=750.91µs 
grafana | logger=migrator t=2025-06-13T14:56:15.930562708Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-13T14:56:15.930939843Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=376.275µs grafana | logger=migrator t=2025-06-13T14:56:15.935028548Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-13T14:56:15.93565003Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=620.891µs grafana | logger=migrator t=2025-06-13T14:56:15.939195178Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:15.94847002Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.273703ms grafana | logger=migrator t=2025-06-13T14:56:15.954140841Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:15.963061579Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.920169ms grafana | logger=migrator t=2025-06-13T14:56:15.966536892Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-13T14:56:15.967664098Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.125926ms grafana | logger=migrator t=2025-06-13T14:56:15.971392318Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-13T14:56:16.046610712Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.216634ms grafana | 
logger=migrator t=2025-06-13T14:56:16.054135538Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-13T14:56:16.056642467Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.506538ms grafana | logger=migrator t=2025-06-13T14:56:16.061763221Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-13T14:56:16.063750844Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.987243ms grafana | logger=migrator t=2025-06-13T14:56:16.068908621Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-13T14:56:16.094511612Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.602051ms grafana | logger=migrator t=2025-06-13T14:56:16.098698314Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:16.108675635Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.97445ms grafana | logger=migrator t=2025-06-13T14:56:16.116092893Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:16.11649265Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=399.227µs grafana | logger=migrator t=2025-06-13T14:56:16.125836228Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-13T14:56:16.126219624Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=383.896µs grafana | logger=migrator t=2025-06-13T14:56:16.131094392Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 
grafana | logger=migrator t=2025-06-13T14:56:16.131315977Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=221.755µs grafana | logger=migrator t=2025-06-13T14:56:16.135071659Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-13T14:56:16.135410242Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=338.813µs grafana | logger=migrator t=2025-06-13T14:56:16.140239537Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-13T14:56:16.140577859Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=337.002µs grafana | logger=migrator t=2025-06-13T14:56:16.155191462Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-13T14:56:16.156763318Z level=info msg="Migration successfully executed" id="create folder table" duration=1.571235ms grafana | logger=migrator t=2025-06-13T14:56:16.161353326Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-13T14:56:16.162729309Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.380533ms grafana | logger=migrator t=2025-06-13T14:56:16.166339461Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-13T14:56:16.167466187Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.126506ms grafana | logger=migrator t=2025-06-13T14:56:16.17226511Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-13T14:56:16.172294332Z level=info msg="Migration successfully executed" 
id="Update folder title length" duration=29.102µs grafana | logger=migrator t=2025-06-13T14:56:16.175876543Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:56:16.177045721Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.166079ms grafana | logger=migrator t=2025-06-13T14:56:16.182731983Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:56:16.18386459Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.131796ms grafana | logger=migrator t=2025-06-13T14:56:16.188491971Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-13T14:56:16.189734274Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.244434ms grafana | logger=migrator t=2025-06-13T14:56:16.193300914Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-13T14:56:16.193796597Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=491.813µs grafana | logger=migrator t=2025-06-13T14:56:16.197157593Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-13T14:56:16.197482615Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=324.412µs grafana | logger=migrator t=2025-06-13T14:56:16.203323908Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:16.205398537Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" 
duration=2.073699ms grafana | logger=migrator t=2025-06-13T14:56:16.209239996Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-13T14:56:16.210695063Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.454738ms grafana | logger=migrator t=2025-06-13T14:56:16.21436618Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:16.215532409Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.165848ms grafana | logger=migrator t=2025-06-13T14:56:16.220112197Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:56:16.221468828Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.356502ms grafana | logger=migrator t=2025-06-13T14:56:16.224989014Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:16.2261087Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.119996ms grafana | logger=migrator t=2025-06-13T14:56:16.231351302Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:56:16.23251311Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.161358ms grafana | logger=migrator t=2025-06-13T14:56:16.276265492Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-13T14:56:16.278272187Z level=info msg="Migration successfully executed" id="create anon_device table" duration=2.005855ms grafana | logger=migrator 
t=2025-06-13T14:56:16.282582096Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-13T14:56:16.284662626Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.08013ms grafana | logger=migrator t=2025-06-13T14:56:16.289818003Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-13T14:56:16.29201348Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.191147ms grafana | logger=migrator t=2025-06-13T14:56:16.296104755Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-13T14:56:16.297194579Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.086713ms grafana | logger=migrator t=2025-06-13T14:56:16.302056786Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-13T14:56:16.305504697Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=3.452892ms grafana | logger=migrator t=2025-06-13T14:56:16.310434269Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-13T14:56:16.311765298Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.330739ms grafana | logger=migrator t=2025-06-13T14:56:16.315139875Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-13T14:56:16.315538412Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=398.657µs grafana | logger=migrator t=2025-06-13T14:56:16.318968323Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 
grafana | logger=migrator t=2025-06-13T14:56:16.328389276Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.420884ms grafana | logger=migrator t=2025-06-13T14:56:16.332713837Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-13T14:56:16.33351265Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=799.683µs grafana | logger=migrator t=2025-06-13T14:56:16.339225214Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:56:16.339245916Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=21.402µs grafana | logger=migrator t=2025-06-13T14:56:16.342895871Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:56:16.344761437Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.865475ms grafana | logger=migrator t=2025-06-13T14:56:16.348548091Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:56:16.348566182Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.671µs grafana | logger=migrator t=2025-06-13T14:56:16.353213065Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:56:16.354522903Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.309278ms grafana | logger=migrator t=2025-06-13T14:56:16.360116789Z level=info msg="Executing migration" id="Restore index for 
dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:56:16.364476242Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=4.359763ms grafana | logger=migrator t=2025-06-13T14:56:16.368729898Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:56:16.36995464Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.224422ms grafana | logger=migrator t=2025-06-13T14:56:16.392475534Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-13T14:56:16.394823912Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.348528ms grafana | logger=migrator t=2025-06-13T14:56:16.399009834Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-13T14:56:16.399867621Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=860.948µs grafana | logger=migrator t=2025-06-13T14:56:16.40564073Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-13T14:56:16.405994693Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=354.514µs grafana | logger=migrator t=2025-06-13T14:56:16.411775832Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-13T14:56:16.412705744Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=926.562µs grafana | logger=migrator t=2025-06-13T14:56:16.41650816Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | 
logger=migrator t=2025-06-13T14:56:16.417607574Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.099224ms grafana | logger=migrator t=2025-06-13T14:56:16.421063496Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-13T14:56:16.422084945Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.020829ms grafana | logger=migrator t=2025-06-13T14:56:16.426893498Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-13T14:56:16.436752771Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.858543ms grafana | logger=migrator t=2025-06-13T14:56:16.443188934Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-13T14:56:16.450312653Z level=info msg="Migration successfully executed" id="add region_slug column" duration=7.122599ms grafana | logger=migrator t=2025-06-13T14:56:16.453870672Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-13T14:56:16.463838192Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.9669ms grafana | logger=migrator t=2025-06-13T14:56:16.469298549Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-13T14:56:16.478628416Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.329147ms grafana | logger=migrator t=2025-06-13T14:56:16.483416858Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-13T14:56:16.483566548Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=148.69µs grafana | logger=migrator t=2025-06-13T14:56:16.486933555Z level=info msg="Executing migration" id="Add unique 
index migration_uid" grafana | logger=migrator t=2025-06-13T14:56:16.487847996Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=913.821µs grafana | logger=migrator t=2025-06-13T14:56:16.493191796Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-13T14:56:16.502593228Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.400972ms grafana | logger=migrator t=2025-06-13T14:56:16.512534016Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-13T14:56:16.512878079Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=343.423µs grafana | logger=migrator t=2025-06-13T14:56:16.516445769Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-13T14:56:16.517774728Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.329849ms grafana | logger=migrator t=2025-06-13T14:56:16.52107373Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:16.548462871Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=27.388671ms grafana | logger=migrator t=2025-06-13T14:56:16.553434426Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-13T14:56:16.554564972Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.132136ms grafana | logger=migrator t=2025-06-13T14:56:16.558229498Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:16.559671345Z level=info 
msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.441657ms grafana | logger=migrator t=2025-06-13T14:56:16.563191982Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:16.563589588Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=393.957µs grafana | logger=migrator t=2025-06-13T14:56:16.568530601Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:16.569484015Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=952.504µs grafana | logger=migrator t=2025-06-13T14:56:16.572819879Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:16.599097786Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.276116ms grafana | logger=migrator t=2025-06-13T14:56:16.602597551Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-13T14:56:16.603392694Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=796.603µs grafana | logger=migrator t=2025-06-13T14:56:16.607987213Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:16.609246128Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.258315ms grafana | logger=migrator t=2025-06-13T14:56:16.612660317Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:16.613156381Z level=info msg="Migration 
successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=494.863µs grafana | logger=migrator t=2025-06-13T14:56:16.632912109Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:16.635066504Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=2.153904ms grafana | logger=migrator t=2025-06-13T14:56:16.64110369Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-13T14:56:16.651761256Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.657577ms grafana | logger=migrator t=2025-06-13T14:56:16.655631826Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-13T14:56:16.663146481Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.510305ms grafana | logger=migrator t=2025-06-13T14:56:16.668322299Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-13T14:56:16.676772468Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=8.447518ms grafana | logger=migrator t=2025-06-13T14:56:16.680068919Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-13T14:56:16.687936618Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=7.863169ms grafana | logger=migrator t=2025-06-13T14:56:16.691304765Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-13T14:56:16.700667474Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.36114ms grafana | logger=migrator t=2025-06-13T14:56:16.705592105Z level=info msg="Executing migration" 
id="add snapshot error_string column" grafana | logger=migrator t=2025-06-13T14:56:16.71592945Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.331805ms grafana | logger=migrator t=2025-06-13T14:56:16.720885323Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-13T14:56:16.721768113Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=881.909µs grafana | logger=migrator t=2025-06-13T14:56:16.72529462Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-13T14:56:16.761517915Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.216384ms grafana | logger=migrator t=2025-06-13T14:56:16.766780899Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-13T14:56:16.773830313Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.049394ms grafana | logger=migrator t=2025-06-13T14:56:16.778790856Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-13T14:56:16.786669606Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.87766ms grafana | logger=migrator t=2025-06-13T14:56:16.790377095Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-13T14:56:16.799168406Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=8.791511ms grafana | logger=migrator t=2025-06-13T14:56:16.802701004Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator 
t=2025-06-13T14:56:16.811845988Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.148105ms grafana | logger=migrator t=2025-06-13T14:56:16.816724096Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-13T14:56:16.816745178Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=22.112µs grafana | logger=migrator t=2025-06-13T14:56:16.820420415Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-13T14:56:16.820438176Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=18.061µs grafana | logger=migrator t=2025-06-13T14:56:16.824226581Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:16.837793723Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.564302ms grafana | logger=migrator t=2025-06-13T14:56:16.844165381Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:16.853676911Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.51042ms grafana | logger=migrator t=2025-06-13T14:56:16.858251778Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-13T14:56:16.858591091Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=338.693µs grafana | logger=migrator t=2025-06-13T14:56:16.864720173Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator 
t=2025-06-13T14:56:16.865092098Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=402.357µs grafana | logger=migrator t=2025-06-13T14:56:16.898528376Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:16.91301227Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=14.486134ms grafana | logger=migrator t=2025-06-13T14:56:16.917507482Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:16.931833395Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=14.330013ms grafana | logger=migrator t=2025-06-13T14:56:16.935990985Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:56:16.947065279Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=11.074544ms grafana | logger=migrator t=2025-06-13T14:56:16.952385217Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:56:16.960090545Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.707508ms grafana | logger=migrator t=2025-06-13T14:56:16.963796284Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-13T14:56:16.964341741Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=545.227µs grafana | logger=migrator t=2025-06-13T14:56:16.969118572Z level=info msg="Executing migration" id="add 
metadata column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:16.978997356Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.878224ms grafana | logger=migrator t=2025-06-13T14:56:16.985489642Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:16.995797865Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=10.306833ms grafana | logger=migrator t=2025-06-13T14:56:17.026691009Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-13T14:56:17.027110058Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=418.559µs grafana | logger=migrator t=2025-06-13T14:56:17.036791901Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-13T14:56:17.037715403Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=922.882µs grafana | logger=migrator t=2025-06-13T14:56:17.043379735Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-13T14:56:17.045213559Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.832714ms grafana | logger=migrator t=2025-06-13T14:56:17.054167763Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:56:17.054217887Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=54.223µs grafana | logger=migrator t=2025-06-13T14:56:17.064471948Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:56:17.06449912Z level=info msg="Migration 
successfully executed" id="increase session_id column length to 1024" duration=28.482µs grafana | logger=migrator t=2025-06-13T14:56:17.068874846Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-13T14:56:17.069472136Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=597.551µs grafana | logger=migrator t=2025-06-13T14:56:17.074836618Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:17.087266196Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.427198ms grafana | logger=migrator t=2025-06-13T14:56:17.093415271Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:17.103193851Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.7782ms grafana | logger=migrator t=2025-06-13T14:56:17.121936485Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-13T14:56:17.123614369Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.677473ms grafana | logger=migrator t=2025-06-13T14:56:17.137692609Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-13T14:56:17.139706244Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.017426ms grafana | logger=migrator t=2025-06-13T14:56:17.15342183Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:17.168951537Z level=info msg="Migration successfully executed" id="add guid column to 
alert_rule table" duration=15.558859ms grafana | logger=migrator t=2025-06-13T14:56:17.176315504Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:17.186749358Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=10.433344ms grafana | logger=migrator t=2025-06-13T14:56:17.195771287Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:17.195799289Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-13T14:56:17.196270531Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-13T14:56:17.196289292Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=519.255µs grafana | logger=migrator t=2025-06-13T14:56:17.204339525Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-13T14:56:17.205396866Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=1.056521ms grafana | logger=migrator t=2025-06-13T14:56:17.210422835Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T14:56:17.212350566Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.93102ms grafana | logger=migrator t=2025-06-13T14:56:17.218017428Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:56:17.21952938Z level=info msg="Migration successfully executed" id="add index in 
alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.514652ms grafana | logger=migrator t=2025-06-13T14:56:17.223846191Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:56:17.224966157Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.119826ms grafana | logger=migrator t=2025-06-13T14:56:17.242704223Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-13T14:56:17.244267999Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.565566ms grafana | logger=migrator t=2025-06-13T14:56:17.268506104Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:17.28088865Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=12.384715ms grafana | logger=migrator t=2025-06-13T14:56:17.28504736Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:17.293853994Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=8.805344ms grafana | logger=migrator t=2025-06-13T14:56:17.298148424Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:17.308558887Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=10.409672ms grafana | logger=migrator t=2025-06-13T14:56:17.311929594Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator 
t=2025-06-13T14:56:17.319010612Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=7.077307ms grafana | logger=migrator t=2025-06-13T14:56:17.323639514Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-13T14:56:17.323910022Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-13T14:56:17.323929514Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=291.329µs grafana | logger=migrator t=2025-06-13T14:56:17.327367465Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-13T14:56:17.328628421Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.260785ms grafana | logger=migrator t=2025-06-13T14:56:17.332549455Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.958934641s grafana | logger=migrator t=2025-06-13T14:56:17.333557543Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-13T14:56:17.354130611Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-13T14:56:17.354339865Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-13T14:56:17.368556964Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:17.448415232Z level=info msg="Restored cache from database" duration=451.72µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.456747354Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-13T14:56:17.456789797Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-13T14:56:17.464273382Z level=info msg="Executing migration" id="create 
resource_migration_log table" grafana | logger=resource-migrator t=2025-06-13T14:56:17.465069196Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=795.294µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.476818449Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-13T14:56:17.476856051Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=39.793µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.485413909Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-13T14:56:17.485498564Z level=info msg="Migration successfully executed" id="drop table resource" duration=84.916µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.499972791Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-13T14:56:17.501864498Z level=info msg="Migration successfully executed" id="create table resource" duration=1.891537ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.509585759Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:17.511494398Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.908399ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.519092181Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.5192248Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=133.539µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.524379947Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.526090743Z level=info msg="Migration successfully executed" id="create table 
resource_history" duration=1.710696ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.530702284Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:17.532030994Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.32833ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.538905427Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:56:17.540691978Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.786461ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.551663128Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:56:17.551746364Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=83.426µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.561525024Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:56:17.562847833Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.32248ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.568975696Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:17.57081649Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.840464ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.588803424Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:56:17.589239913Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=438.929µs grafana | logger=resource-migrator 
t=2025-06-13T14:56:17.596813324Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:56:17.598911996Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.097702ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.602727563Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:17.604131918Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.405515ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.609300837Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:56:17.610772916Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.472509ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.627515026Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.642626025Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=15.111609ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.646122751Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-13T14:56:17.653609156Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=7.485115ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.656984894Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-13T14:56:17.658243259Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.257555ms grafana | 
logger=resource-migrator t=2025-06-13T14:56:17.662725051Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-13T14:56:17.663933813Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.207942ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.66730558Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.677796508Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.490248ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.683225364Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-13T14:56:17.696440806Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=13.215632ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.705960978Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-13T14:56:17.706008302Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-13T14:56:17.706887061Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=926.983µs grafana | logger=resource-migrator t=2025-06-13T14:56:17.711212963Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-13T14:56:17.712768908Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.555195ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.716456537Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.728179567Z level=info msg="Migration 
successfully executed" id="Add generation to resource history" duration=11.71048ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.740247382Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-13T14:56:17.742622742Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.36491ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.747120895Z level=info msg="migrations completed" performed=26 skipped=0 duration=282.879826ms grafana | logger=resource-migrator t=2025-06-13T14:56:17.748255492Z level=info msg="Unlocking database" grafana | t=2025-06-13T14:56:17.748689531Z level=info caller=logger.go:214 time=2025-06-13T14:56:17.748661589Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-13T14:56:17.761472454Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-13T14:56:17.79995624Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-13T14:56:17.799989152Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-13T14:56:17.800059657Z level=info msg="Plugins loaded" count=53 duration=38.587843ms grafana | logger=query_data t=2025-06-13T14:56:17.804982749Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-13T14:56:17.812980479Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T14:56:17.82856498Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-13T14:56:17.836988919Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist 
t=2025-06-13T14:56:17.83701158Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-13T14:56:17.840463173Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-13T14:56:17.841402756Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2025-06-13T14:56:17.841687376Z level=info msg="Warming state cache for startup" grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:17.852319193Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T14:56:17.853046542Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2025-06-13T14:56:17.855291883Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=plugins.update.checker t=2025-06-13T14:56:17.931772173Z level=info msg="Update check succeeded" duration=90.830538ms grafana | logger=grafana.update.checker t=2025-06-13T14:56:17.935543908Z level=info msg="Update check succeeded" duration=93.590695ms grafana | logger=ngalert.state.manager t=2025-06-13T14:56:17.987405237Z level=info msg="State cache has been initialized" states=0 duration=145.718002ms grafana | logger=ngalert.scheduler t=2025-06-13T14:56:17.98744954Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-13T14:56:17.987513454Z level=info msg=starting first_tick=2025-06-13T14:56:20Z grafana | logger=provisioning.datasources t=2025-06-13T14:56:17.994944736Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2025-06-13T14:56:18.017011844Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-13T14:56:18.017044737Z level=info msg="finished to provision alerting" grafana | 
logger=provisioning.dashboard t=2025-06-13T14:56:18.018633154Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:18.15020084Z level=info msg="Patterns update finished" duration=102.349925ms grafana | logger=plugin.installer t=2025-06-13T14:56:18.256359793Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-13T14:56:18.324999824Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.338852448Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.340075471Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.341215968Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.352787538Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.354459541Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.3556325Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.357269821Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.361592993Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:18.363975573Z level=info msg="Adding GroupVersion 
iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=plugins.registration t=2025-06-13T14:56:18.369008183Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:18.369050716Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=516.527009ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:18.369158693Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=app-registry t=2025-06-13T14:56:18.429716049Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-13T14:56:18.647970714Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-13T14:56:18.827772175Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-13T14:56:18.854404561Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:18.854434984Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=485.260479ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:18.854467446Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=provisioning.dashboard t=2025-06-13T14:56:18.912129256Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-13T14:56:19.033331893Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-13T14:56:19.087259132Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration 
t=2025-06-13T14:56:19.102996263Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:19.103017395Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=248.544179ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:19.103046277Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-13T14:56:19.284240962Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-13T14:56:19.349908852Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-13T14:56:19.366301318Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:19.36632926Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=263.275323ms grafana | logger=infra.usagestats t=2025-06-13T14:56:52.849026693Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-13 14:56:12,462] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yam
l-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,463] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,466] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@221af3c0 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,469] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:56:12,474] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 14:56:12,482] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:12,500] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:12,501] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:12,512] INFO Socket connection established, initiating session, client: /172.17.0.5:34864, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:12,554] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000250730000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:12,672] INFO Session: 0x100000250730000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:12,672] INFO EventThread shut down for session: 0x100000250730000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2025-06-13 14:56:13,441] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-13 14:56:13,747] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:56:13,833] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-13 14:56:13,834] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:13,834] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:13,848] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:13,852] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../sh
are/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar
:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java
/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,852] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:user.dir=/home/appuser 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,853] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,854] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:13,858] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 14:56:13,864] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:13,866] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:13,869] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:13,878] INFO Socket connection established, initiating session, client: /172.17.0.5:34866, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:13,902] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000250730001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:13,908] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:14,392] INFO Cluster ID = qXUltZKbTyOIemKjVEFwng (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:14,397] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-13 14:56:14,463] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | 
delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | 
log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | 
offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | 
replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | 
sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = 
zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-13 14:56:14,498] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:14,498] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:14,498] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:14,500] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:14,533] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-13 14:56:14,535] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-13 14:56:14,548] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager) kafka | [2025-06-13 14:56:14,548] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) kafka | [2025-06-13 14:56:14,551] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) kafka | [2025-06-13 14:56:14,561] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-13 14:56:14,607] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-13 14:56:14,622] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-13 14:56:14,642] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-13 14:56:14,688] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-13 14:56:15,060] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-13 14:56:15,063] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-13 14:56:15,085] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-13 14:56:15,086] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-13 14:56:15,086] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) kafka | [2025-06-13 14:56:15,090] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-13 14:56:15,095] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-13 14:56:15,112] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-13 14:56:15,116] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-13 14:56:15,120] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-13 14:56:15,117] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-13 14:56:15,131] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-13 14:56:15,151] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:15,183] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749826575168,1749826575168,1,0,0,72057603977576449,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:15,184] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:15,251] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-13 14:56:15,258] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:15,262] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:15,265] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:15,273] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:15,280] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:15,287] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,290] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:15,292] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,297] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 14:56:15,313] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:56:15,320] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:56:15,320] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-13 14:56:15,341] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-13 14:56:15,342] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,359] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,365] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:15,367] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,370] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,392] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,397] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,401] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-13 14:56:15,403] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-13 14:56:15,415] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-13 14:56:15,418] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,418] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,418] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,419] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,422] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,422] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,422] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,423] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-13 14:56:15,423] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,425] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:56:15,438] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:56:15,440] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:15,440] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:15,440] INFO Kafka startTimeMs: 1749826575429 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:15,442] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-13 14:56:15,445] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:15,446] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:15,451] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:15,451] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:15,452] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:15,453] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:15,456] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:15,456] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,461] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:56:15,469] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,469] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,470] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,470] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,471] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,488] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:15,539] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:56:15,601] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:15,602] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:20,490] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:20,490] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:49,800] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 14:56:49,807] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 14:56:49,823] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:49,834] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:49,865] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(lfgFLSCOT9CmZ1nij293hg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(BPn16ne6TE6Dlb6_NbKVFg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:49,867] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:56:49,876] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
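An aside for anyone triaging these CSIT logs: the state.change.logger entries follow a fixed "Changed partition <name> ... from <state> to <state>" pattern, so the transitions can be tallied mechanically when checking that every partition reached OnlinePartition. A minimal, hypothetical Python sketch (not part of the build output; the regex is an assumption based only on the lines shown here):

```python
import re

# Matches both forms seen in this log:
#   "Changed partition X state from A to B ..."  (NewPartition entries)
#   "Changed partition X from A to B ..."        (OnlinePartition entries)
PATTERN = re.compile(r"Changed partition (\S+) (?:state )?from (\w+) to (\w+)")

def count_transitions(lines):
    """Return {(from_state, to_state): count} over matching log lines."""
    counts = {}
    for line in lines:
        m = PATTERN.search(line)
        if m:
            key = (m.group(2), m.group(3))
            counts[key] = counts.get(key, 0) + 1
    return counts

# Two sample entries copied from the output above.
sample = [
    "kafka | [2025-06-13 14:56:49,870] INFO [Controller id=1 epoch=1] "
    "Changed partition __consumer_offsets-22 state from NonExistentPartition "
    "to NewPartition with assigned replicas 1 (state.change.logger)",
    "kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] "
    "Changed partition __consumer_offsets-22 from NewPartition to "
    "OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, "
    "isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) "
    "(state.change.logger)",
]

counts = count_transitions(sample)
```

For a healthy single-broker run like this one, the NonExistentPartition-to-NewPartition count should equal the NewPartition-to-OnlinePartition count (51 here: 50 __consumer_offsets partitions plus policy-pdp-pap-0).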
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,884] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,885] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,886] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,887] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,888] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,889] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,890] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:56:49,890] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1
epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka 
| [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 
14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 14:56:50,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 
(state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 
14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 14:56:50,059] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-13 14:56:50,061] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from 
NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:56:50,064] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:56:50,071] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 
1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | 
[2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,073] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,074] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
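The controller entries above follow a fixed pattern: a `Sending become-leader LeaderAndIsr request ... to broker N for partition TOPIC-P` line per partition, followed by an INFO summary (here, "51 become-leader and 0 become-follower partitions"). When triaging a CSIT run it can help to reduce this spam to a per-topic count. The sketch below is an illustrative, hypothetical helper (the `count_become_leader` name and the regex are my own; only the log line format is taken from the excerpt):

```python
import re
from collections import Counter

# Matches the controller's become-leader TRACE lines seen in state.change.logger.
# Non-greedy .*? skips the LeaderAndIsrPartitionState(...) payload on the same line.
LEADER_RE = re.compile(
    r"Sending become-leader LeaderAndIsr request .*?"
    r"to broker (?P<broker>\d+) for partition (?P<partition>\S+) "
)

def count_become_leader(lines):
    """Count become-leader partitions per topic from state.change.logger lines."""
    topics = Counter()
    for line in lines:
        m = LEADER_RE.search(line)
        if m:
            # Partition names look like '__consumer_offsets-47' or 'policy-pdp-pap-0';
            # the topic is everything before the final '-<index>'.
            topic = m.group("partition").rsplit("-", 1)[0]
            topics[topic] += 1
    return topics

# Two abbreviated sample lines in the format shown above.
sample = [
    "kafka | [2025-06-13 14:56:50,057] TRACE [Controller id=1 epoch=1] "
    "Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(...) "
    "to broker 1 for partition __consumer_offsets-47 (state.change.logger)",
    "kafka | [2025-06-13 14:56:50,058] TRACE [Controller id=1 epoch=1] "
    "Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(...) "
    "to broker 1 for partition policy-pdp-pap-0 (state.change.logger)",
]
print(count_become_leader(sample))
```

In a healthy single-broker startup like this one, the per-topic totals should add up to the count reported by the controller's INFO summary line (51 partitions here: 50 for `__consumer_offsets` plus 1 for `policy-pdp-pap`).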
kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:56:50,075] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:56:50,076] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:56:50,116] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 14:56:50,116] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:56:50,117] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 14:56:50,118] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 14:56:50,119] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-13 14:56:50,119] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
kafka | [2025-06-13 14:56:50,185] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,202] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,204] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,205] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,206] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,228] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,229] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,230] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,230] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,230] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,248] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,254] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,254] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,254] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,254] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,266] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,267] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,267] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,267] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,267] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,274] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,275] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,275] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,275] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,275] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,281] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,282] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,282] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,282] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,282] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,289] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,290] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,290] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,290] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,291] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,301] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,302] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,302] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,302] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,302] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,308] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,308] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,308] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,308] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,308] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,316] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,316] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,316] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,316] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,317] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,325] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,326] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,326] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,326] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,326] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,332] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,333] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,333] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,333] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,333] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,341] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,342] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,342] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,342] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,342] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,350] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,351] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,351] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,351] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,352] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,358] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,359] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,359] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,359] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,359] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,376] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,377] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,378] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,378] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,378] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,386] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,387] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,387] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,387] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,387] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,394] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,395] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,395] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,395] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,395] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,407] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,408] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,408] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,409] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,409] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,416] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,416] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,416] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,417] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,417] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,423] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,424] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,424] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,424] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,424] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,433] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,434] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,434] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,434] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,434] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-13 14:56:50,440] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,441] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,441] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,441] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,441] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,447] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,454] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,454] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,455] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,455] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,464] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,465] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,465] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,465] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,465] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,473] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,474] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,474] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,474] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,474] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,482] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,482] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,482] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,483] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,483] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,506] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,508] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,508] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,508] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,508] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,515] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,515] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,516] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,516] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,516] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,523] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,524] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,525] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,525] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,525] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,532] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,532] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,532] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,532] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,532] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,540] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,541] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,541] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,541] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,541] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,550] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,551] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,551] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,551] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,551] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,561] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,562] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,562] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,563] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,563] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,577] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,579] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,579] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,579] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,579] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(lfgFLSCOT9CmZ1nij293hg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,586] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,587] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,587] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,587] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,587] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,596] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,597] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,597] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,597] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,597] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,604] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,605] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,605] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,605] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,605] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,612] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,613] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,613] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,613] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,613] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,629] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,631] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,631] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,631] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,631] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,639] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,640] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,640] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,640] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,640] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,645] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,646] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,646] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,646] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,646] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,653] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,654] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,654] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,654] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,654] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,660] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,661] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,661] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,661] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,661] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,667] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,668] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,668] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,668] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,668] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,675] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,676] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,676] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,676] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,676] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:56:50,682] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:56:50,684] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:56:50,684] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,684] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:56:50,684] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-13 14:56:50,693] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,694] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,694] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,694] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,694] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,700] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,701] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,701] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,701] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,701] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,707] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,707] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,707] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,707] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,707] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,713] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:56:50,713] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:56:50,713] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,713] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:56:50,714] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(BPn16ne6TE6Dlb6_NbKVFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 14:56:50,719] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 14:56:50,720] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 14:56:50,726] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,728] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:50,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,734] INFO [Broker id=1] Finished LeaderAndIsr request in 665ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-13 14:56:50,738] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=BPn16ne6TE6Dlb6_NbKVFg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]),
LeaderAndIsrTopicError(topicId=lfgFLSCOT9CmZ1nij293hg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:56:50,739] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,740] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,741] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,742] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,743] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,744] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,745] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,746] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,747] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,747] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:56:50,748] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,749] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,750] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-13 14:56:50,751] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:56:51,358] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:51,372] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:51,504] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a in Empty state. Created a new member id consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:51,507] INFO [GroupCoordinator 1]: Preparing to rebalance group acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960 with group instance id None; client reason: need to re-join with the given member-id: consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:53,302] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 7d912964-8778-42e7-b0af-b72511d03f65 in Empty state. Created a new member id consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:53,305] INFO [GroupCoordinator 1]: Preparing to rebalance group 7d912964-8778-42e7-b0af-b72511d03f65 in state PreparingRebalance with old generation 0 (__consumer_offsets-17) (reason: Adding new member consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f with group instance id None; client reason: need to re-join with the given member-id: consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:54,385] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:54,412] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:54,508] INFO [GroupCoordinator 1]: Stabilized group acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:54,512] INFO [GroupCoordinator 1]: Assignment received from leader consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960 for group acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:56,306] INFO [GroupCoordinator 1]: Stabilized group 7d912964-8778-42e7-b0af-b72511d03f65 generation 1 (__consumer_offsets-17) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:56,324] INFO [GroupCoordinator 1]: Assignment received from leader consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f for group 7d912964-8778-42e7-b0af-b72511d03f65 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.7:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api | . ____ _ __ _ _
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-api | =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot :: (v3.4.6)
policy-api |
policy-api | [2025-06-13T14:56:28.187+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-13T14:56:28.265+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 38 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-13T14:56:28.267+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-13T14:56:29.790+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-13T14:56:29.975+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 174 ms. Found 6 JPA repository interfaces.
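The GroupCoordinator entries above trace Kafka's group-membership handshake for each consumer group: a dynamic member joins with an unknown member id, is handed a generated id and told to rejoin, the group moves to PreparingRebalance, and once the leader's assignment arrives the group stabilizes at generation 1. A toy sketch of that state flow, for orientation only (this is not Kafka's implementation; the class and method names are invented):

```python
# Toy model of the coordinator state flow visible in the log above.
# Illustrative only: ToyGroupCoordinator and its methods are invented names,
# not Kafka APIs.
import uuid


class ToyGroupCoordinator:
    def __init__(self, group_id: str):
        self.group_id = group_id
        self.state = "Empty"            # "joins group ... in Empty state"
        self.generation = 0
        self.members: set[str] = set()

    def join(self, client_id: str) -> str:
        """First join with an unknown id: mint a member id and rebalance."""
        member_id = f"consumer-{self.group_id}-{client_id}-{uuid.uuid4()}"
        self.members.add(member_id)
        self.state = "PreparingRebalance"  # "Preparing to rebalance group ..."
        return member_id                   # client must rejoin with this id

    def receive_assignment(self) -> None:
        """Leader's assignment arrives: bump the generation and stabilize."""
        self.generation += 1
        self.state = "Stable"              # "Stabilized group ... generation 1"


coord = ToyGroupCoordinator("policy-pap")
member = coord.join("4")
coord.receive_assignment()
```

In the log this whole cycle takes about three seconds per group, most of it the rebalance timeout while the coordinator waits for further joiners.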
policy-api | [2025-06-13T14:56:30.670+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-13T14:56:30.684+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-13T14:56:30.686+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-13T14:56:30.686+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-13T14:56:30.723+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-13T14:56:30.724+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2385 ms
policy-api | [2025-06-13T14:56:31.068+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-13T14:56:31.161+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-13T14:56:31.211+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-13T14:56:31.619+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-13T14:56:31.655+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-13T14:56:31.865+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@64e89bb2
policy-api | [2025-06-13T14:56:31.867+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-13T14:56:31.946+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | Database driver: undefined/unknown
policy-api | Database version: 16.4
policy-api | Autocommit mode: undefined/unknown
policy-api | Isolation level: undefined/unknown
policy-api | Minimum pool size: undefined/unknown
policy-api | Maximum pool size: undefined/unknown
policy-api | [2025-06-13T14:56:34.013+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-13T14:56:34.017+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-13T14:56:34.712+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-13T14:56:35.554+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-13T14:56:36.703+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-13T14:56:36.753+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-13T14:56:37.391+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-13T14:56:37.530+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-13T14:56:37.550+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-13T14:56:37.572+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.083 seconds (process running for 10.679)
policy-api | [2025-06-13T14:56:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-13T14:56:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-13T14:56:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-csit | Invoking the robot tests from: drools-pdp-test.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Drools-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Drools-Pdp-Test | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 
policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | 
rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator 
| > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.440797
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.492741
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.545059
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.595557
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.65367
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.707807
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.759761
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.8103
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.863218
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.919039
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:15.969415
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.022761
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.076077
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.120898
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.175984
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.223475
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.297404
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.342957
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.400883
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.454849
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.505479
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.557173
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.607622
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.658901
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.700479
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.745433
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.802728
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.855789
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.915318
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:16.963563
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.015449
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.074302
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.126604
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.183911
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.249803
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.317274
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.390667
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.451538
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.513211
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.576356
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.648137
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.703836
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.76452
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.818172
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.888661
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.932079
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:17.988926
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.037651
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.087328
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.138696
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.191437
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.241571
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.290951
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.345038
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.395057
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.446864
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.50284
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.562301
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.61405
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.674501
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.726497
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.774149
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.82203
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.870433
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.92511
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:18.986572
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.032727
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.091089
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.144845
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.206236
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.266869
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.325855
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.371188
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.424429
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.47916
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.530726
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.582255
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.633213
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.68508
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.735109
policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.783763
policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.834495
policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.886763
policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.930488
policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:19.976015
policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.025939
policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.07408
policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.1192
policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.174166
policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.226108
policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.276073
policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.332226
policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.383712
policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.43526
policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.485872
policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456150800u | 1 | 2025-06-13 14:56:20.539804
policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.585811
policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.639229
policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.682896
policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.73628
policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.790419
policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.837404
policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.889292
policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.943046
policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:20.99479
policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:21.0513
policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:21.100788
policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:21.155013
policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306251456150900u | 1 | 2025-06-13 14:56:21.203804
policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.260062
policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.313835
policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.360396
policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.422411
policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.472959
policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.52898
policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.591871
policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.645707
policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306251456151000u | 1 | 2025-06-13 14:56:21.685396
policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306251456151100u | 1 | 2025-06-13 14:56:21.732649
policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306251456151200u | 1 | 2025-06-13 14:56:21.786616
policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306251456151200u | 1 | 2025-06-13 14:56:21.831445
policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306251456151200u | 1 | 2025-06-13 14:56:21.900383
policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306251456151200u | 1 | 2025-06-13 14:56:21.958144
policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306251456151300u | 1 | 2025-06-13 14:56:22.017909
policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306251456151300u | 1 | 2025-06-13 14:56:22.06595
policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306251456151300u | 1 | 2025-06-13 14:56:22.114873
policy-db-migrator | (126 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: OK @ 1300
policy-db-migrator | Initializing clampacm...
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | clampacm: upgrade available: 0 -> 1701
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1701
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | 
INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | 
en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:22.783977 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:22.837555 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:22.894186 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:22.950141 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.008272 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.063591 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 
2025-06-13 14:56:23.117465 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.159321 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.198276 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.247231 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.293823 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.33917 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306251456221400u | 1 | 2025-06-13 14:56:23.385742 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.432504 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.478659 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.535004 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.592213 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.641506 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.690429 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.734846 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306251456221500u | 1 | 2025-06-13 14:56:23.781786 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 
1600 | 1306251456221600u | 1 | 2025-06-13 14:56:23.828455 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306251456221600u | 1 | 2025-06-13 14:56:23.875759 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306251456221601u | 1 | 2025-06-13 14:56:23.921216 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306251456221601u | 1 | 2025-06-13 14:56:23.970646 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306251456221700u | 1 | 2025-06-13 14:56:24.022297 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306251456221700u | 1 | 2025-06-13 14:56:24.071834 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306251456221700u | 1 | 2025-06-13 14:56:24.122384 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.174821 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.227782 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.278642 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.327986 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.375529 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.429628 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.479034 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.533436 policy-db-migrator | 37 | 
0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306251456221701u | 1 | 2025-06-13 14:56:24.580034 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | 
postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | 
en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306251456251600u | 1 | 2025-06-13 14:56:25.210539 policy-db-migrator 
| (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | 
| | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc 
| en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306251456251600u | 1 | 2025-06-13 14:56:25.833424 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306251456251600u | 1 | 2025-06-13 14:56:25.8972 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-drools-pdp | Waiting for pap port 6969... policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused [previous line repeated while waiting for pap to start] policy-drools-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! policy-drools-pdp | Waiting for kafka port 9092... policy-drools-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! 
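The retry loop above comes from the drools-pdp entrypoint polling its dependencies (pap, then kafka) before booting. A minimal sketch of such a wait loop; the function name `wait_for_port` is ours, and bash's `/dev/tcp` probe stands in for the `nc` call the container actually uses, so the sketch is self-contained:

```shell
#!/bin/bash
# Poll a TCP port until it accepts connections or attempts run out.
# Hypothetical helper; the real entrypoint loops over `nc` instead.
wait_for_port() {
  local host=$1 port=$2 attempts=${3:-5} delay=${4:-1} i=0
  while [ "$i" -lt "$attempts" ]; do
    # bash built-in TCP probe; swap in `nc -z` where available
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "Connection to $host $port succeeded!"
      return 0
    fi
    echo "connect to $host port $port failed: Connection refused" >&2
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

Called as `wait_for_port pap 6969`, this produces exactly the failed/succeeded pattern seen in the log.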
policy-drools-pdp | + operation=boot policy-drools-pdp | + dockerBoot policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- policy-drools-pdp | + echo '-- dockerBoot --' policy-drools-pdp | + set -x policy-drools-pdp | + set -e policy-drools-pdp | + configure policy-drools-pdp | -- dockerBoot -- policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- configure --' policy-drools-pdp | + set -x policy-drools-pdp | + reload policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- reload --' policy-drools-pdp | + set -x policy-drools-pdp | -- configure -- policy-drools-pdp | -- reload -- policy-drools-pdp | + systemConfs policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- systemConfs --' policy-drools-pdp | -- systemConfs -- policy-drools-pdp | + set -x policy-drools-pdp | + local confName policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' policy-drools-pdp | + return 0 policy-drools-pdp | + maven policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- maven --' policy-drools-pdp | -- maven -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] policy-drools-pdp | + features policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- features --' policy-drools-pdp | + set -x policy-drools-pdp | -- features -- policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' policy-drools-pdp | + return 0 policy-drools-pdp | + security policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- security --' policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] policy-drools-pdp | -- security -- policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] policy-drools-pdp | + serverConfig properties policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- 
serverConfig -- policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=properties' policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config policy-drools-pdp | + serverConfig xml policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=xml' policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' policy-drools-pdp | + return 0 policy-drools-pdp | + serverConfig json policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | -- serverConfig -- policy-drools-pdp | + echo '-- serverConfig --' policy-drools-pdp | + set -x policy-drools-pdp | + local 'configExtSuffix=json' policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' policy-drools-pdp | + return 0 policy-drools-pdp | + scripts pre.sh policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- scripts --' policy-drools-pdp | + set -x policy-drools-pdp | -- scripts -- policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + 
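The trace above shows `serverConfig` being invoked once per extension (properties, xml, json): it copies any matching files from the install config directory into the engine's config directory, and returns early when none exist. A hedged sketch of that pattern (paths are taken from the log; the `SRC_DIR`/`DST_DIR` overrides are ours, and the body is an approximation, not the actual script):

```shell
#!/bin/bash
# Approximation of the serverConfig step in the trace: copy any
# /tmp/policy-install/config/*.<suffix> files into $POLICY_HOME/config.
serverConfig() {
  local configExtSuffix=$1
  local src=${SRC_DIR:-/tmp/policy-install/config}
  local dst=${DST_DIR:-${POLICY_HOME:-/opt/app/policy}/config}
  local configFile
  # nothing to do if no file with this suffix is staged
  ls "$src"/*."$configExtSuffix" > /dev/null 2>&1 || return 0
  for configFile in "$src"/*."$configExtSuffix"; do
    echo "configuration $configExtSuffix: $configFile"
    cp -f "$configFile" "$dst"
  done
}
```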
PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + set -a policy-drools-pdp | + POLICY_HOME=/opt/app/policy policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' policy-drools-pdp | + '[' -d /opt/app/policy/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] policy-drools-pdp | + : policy-drools-pdp | + '[' -d /home/policy/bin ] policy-drools-pdp | + set +a policy-drools-pdp | + policy exec policy-drools-pdp | -- /opt/app/policy/bin/policy exec -- policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller policy-drools-pdp | + OPERATION=none policy-drools-pdp | + '[' -z exec ] policy-drools-pdp | + OPERATION=exec policy-drools-pdp | + shift policy-drools-pdp | + '[' -z ] policy-drools-pdp | + '[' -z /opt/app/policy ] policy-drools-pdp | + policy_exec policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- policy_exec --' policy-drools-pdp | -- policy_exec -- policy-drools-pdp | + set -x policy-drools-pdp | + cd 
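The `env.sh` trace above conditionally prepends tool directories ($POLICY_HOME/bin, the JDK bin, ~/bin) to PATH, skipping any that do not exist. A minimal sketch of that guard; the helper name `prepend_path` is ours:

```shell
#!/bin/bash
# Prepend a directory to PATH only if it exists, mirroring the
# '[ -d dir ] && PATH=dir:$PATH' guards seen in env.sh.
prepend_path() {
  [ -d "$1" ] && PATH="$1:$PATH"
  return 0
}
```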
/opt/app/policy policy-drools-pdp | + check_x_file bin/policy-management-controller policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- check_x_file --' policy-drools-pdp | -- check_x_file -- policy-drools-pdp | + set -x policy-drools-pdp | + FILE=bin/policy-management-controller policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' -x bin/policy-management-controller ]] policy-drools-pdp | + return 0 policy-drools-pdp | + bin/policy-management-controller exec policy-drools-pdp | -- bin/policy-management-controller exec -- policy-drools-pdp | + _DIR=/opt/app/policy policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ] policy-drools-pdp | + CONTROLLER=policy-management-controller policy-drools-pdp | + RETVAL=0 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID policy-drools-pdp | + exec_start policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- exec_start --' policy-drools-pdp | + set -x policy-drools-pdp | + status policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- status --' policy-drools-pdp | -- exec_start -- policy-drools-pdp | -- status -- policy-drools-pdp | + set -x policy-drools-pdp | + '[' -f /opt/app/policy/PID ] policy-drools-pdp | + '[' true ] policy-drools-pdp | + pidof -s java policy-drools-pdp | Policy Management (no pidfile) is NOT running policy-drools-pdp | + _PID= policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running' policy-drools-pdp | + _RUNNING=0 policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + RETVAL=1 policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running' policy-drools-pdp | + '[' 0 '=' 1 ] policy-drools-pdp | + preRunning policy-drools-pdp | + '[' y '=' y ] policy-drools-pdp | + echo '-- preRunning --' policy-drools-pdp | -- preRunning -- policy-drools-pdp | + set -x policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd policy-drools-pdp | + ls 
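The `status` step above first looks for the PID file, falls back to `pidof -s java`, and with neither present reports "(no pidfile) is NOT running", after which the entrypoint proceeds to `preRunning`. A sketch under the assumption that the logic reduces to "pidfile exists and the process is alive" (the `pidof` fallback is omitted here so the sketch stays self-contained):

```shell
#!/bin/bash
# Simplified status check: report whether the controller is running,
# based on the PID file (path and wording taken from the trace).
status() {
  local pidfile=${_PIDFILE:-/opt/app/policy/PID} _PID= suffix=
  if [ -f "$pidfile" ]; then
    _PID=$(cat "$pidfile")
  else
    suffix=" (no pidfile)"
  fi
  if [ -n "$_PID" ] && kill -0 "$_PID" 2>/dev/null; then
    echo "Policy Management$suffix is running"
    return 0
  fi
  echo "Policy Management$suffix is NOT running"
  return 1
}
```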
/opt/app/policy/lib/accessors-smart-2.5.0.jar+ xargs /opt/app/policy/lib/angus-activation-2.0.2.jar -I X printf ':%s' X policy-drools-pdp | /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.13.0.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.15.11.jar /opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.48.3.jar /opt/app/policy/lib/classgraph-4.8.179.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.10.1.jar /opt/app/policy/lib/commons-cli-1.9.0.jar /opt/app/policy/lib/commons-codec-1.18.0.jar /opt/app/policy/lib/commons-collections-3.2.2.jar /opt/app/policy/lib/commons-collections4-4.5.0-M3.jar /opt/app/policy/lib/commons-configuration2-2.11.0.jar /opt/app/policy/lib/commons-digester-2.1.jar /opt/app/policy/lib/commons-io-2.18.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.17.0.jar /opt/app/policy/lib/commons-logging-1.3.5.jar /opt/app/policy/lib/commons-net-3.11.1.jar /opt/app/policy/lib/commons-text-1.13.0.jar /opt/app/policy/lib/commons-validator-1.8.0.jar /opt/app/policy/lib/core-0.12.4.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar /opt/app/policy/lib/drools-engine-8.40.1.Final.jar 
/opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar /opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.36.0.jar /opt/app/policy/lib/failureaccess-1.0.3.jar /opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-2.12.1.jar /opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.4.6-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/handy-uri-templates-2.1.8.jar /opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar /opt/app/policy/lib/hibernate-core-6.6.16.Final.jar /opt/app/policy/lib/hk2-api-3.0.6.jar /opt/app/policy/lib/hk2-locator-3.0.6.jar /opt/app/policy/lib/hk2-utils-3.0.6.jar /opt/app/policy/lib/httpclient-4.5.13.jar /opt/app/policy/lib/httpcore-4.4.15.jar /opt/app/policy/lib/icu4j-74.2.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-3.0.0.jar /opt/app/policy/lib/jackson-annotations-2.18.3.jar /opt/app/policy/lib/jackson-core-2.18.3.jar /opt/app/policy/lib/jackson-databind-2.18.3.jar 
/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar /opt/app/policy/lib/jakarta.activation-api-2.1.3.jar /opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-2.6.1.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.1.1.jar /opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-3.2.0.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.30.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar /opt/app/policy/lib/jcodings-1.0.58.jar /opt/app/policy/lib/jersey-client-3.1.10.jar /opt/app/policy/lib/jersey-common-3.1.10.jar /opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar /opt/app/policy/lib/jersey-hk2-3.1.10.jar /opt/app/policy/lib/jersey-server-3.1.10.jar /opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar /opt/app/policy/lib/jetty-http-12.0.21.jar /opt/app/policy/lib/jetty-io-12.0.21.jar /opt/app/policy/lib/jetty-security-12.0.21.jar /opt/app/policy/lib/jetty-server-12.0.21.jar /opt/app/policy/lib/jetty-session-12.0.21.jar 
/opt/app/policy/lib/jetty-util-12.0.21.jar /opt/app/policy/lib/joda-time-2.10.2.jar /opt/app/policy/lib/joni-2.2.1.jar /opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsoup-1.17.2.jar /opt/app/policy/lib/jspecify-1.0.0.jar /opt/app/policy/lib/kafka-clients-3.9.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.5.18.jar /opt/app/policy/lib/logback-core-1.5.18.jar /opt/app/policy/lib/lombok-1.18.38.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar 
/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar /opt/app/policy/lib/opentelemetry-api-1.43.0.jar /opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar /opt/app/policy/lib/opentelemetry-context-1.43.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.6.0.jar /opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.5.jar /opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar 
/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.8.jar /opt/app/policy/lib/slf4j-api-2.0.17.jar /opt/app/policy/lib/snakeyaml-2.4.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.29.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.6-4.jar policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/a
pp/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310
-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/jon
i-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/
lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:
/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar
policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh
policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$'
policy-drools-pdp | + '[' -z /opt/app/policy ]
policy-drools-pdp | + set -a
policy-drools-pdp | + POLICY_HOME=/opt/app/policy
policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf'
policy-drools-pdp | + '[' -d /opt/app/policy/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ]
policy-drools-pdp | + :
policy-drools-pdp | + '[' -d /home/policy/bin ]
policy-drools-pdp | + set +a
policy-drools-pdp | + /opt/app/policy/bin/configure-maven
policy-drools-pdp | + export 'M2_HOME=/home/policy/.m2'
policy-drools-pdp | + mkdir -p /home/policy/.m2
policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ]
policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml
policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ]
policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties
policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml'
policy-drools-pdp | + cd /opt/app/policy
policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp /opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-va
lidator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/op
t/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/
policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-
basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy
-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/l
ib/zstd-jni-1.5.6-4.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main
policy-drools-pdp | [2025-06-13T14:56:51.047+00:00|INFO|LifecycleFsm|main] The mandatory Policy Types are []. Compliance is true
policy-drools-pdp | [2025-06-13T14:56:51.050+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers:
policy-drools-pdp | [org.onap.policy.drools.lifecycle.LifecycleFeature@2235eaab]
policy-drools-pdp | [2025-06-13T14:56:51.058+00:00|INFO|PolicyContainer|main] PolicyContainer.main: configDir=config
policy-drools-pdp | [2025-06-13T14:56:51.059+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers:
policy-drools-pdp | []
policy-drools-pdp | [2025-06-13T14:56:51.067+00:00|INFO|IndexedKafkaTopicSourceFactory|main] IndexedKafkaTopicSourceFactory []: no topic for KAFKA Source
policy-drools-pdp | [2025-06-13T14:56:51.069+00:00|INFO|IndexedKafkaTopicSinkFactory|main] IndexedKafkaTopicSinkFactory []: no topic for KAFKA Sink
policy-drools-pdp | [2025-06-13T14:56:51.423+00:00|INFO|PolicyEngineManager|main] lock manager is org.onap.policy.drools.system.internal.SimpleLockManager@376a312c
policy-drools-pdp | [2025-06-13T14:56:51.433+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], 
context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START
policy-drools-pdp | [2025-06-13T14:56:51.449+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
policy-drools-pdp | [2025-06-13T14:56:51.450+00:00|INFO|JettyServletServer|CONFIG-9696] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN
policy-drools-pdp | [2025-06-13T14:56:51.460+00:00|INFO|Server|CONFIG-9696] jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0
policy-drools-pdp | [2025-06-13T14:56:51.502+00:00|INFO|DefaultSessionIdManager|CONFIG-9696] Session workerName=node0
policy-drools-pdp | 
[2025-06-13T14:56:51.518+00:00|INFO|ContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored.
policy-drools-pdp | Jun 13, 2025 2:56:52 PM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources
policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored.
policy-drools-pdp | [2025-06-13T14:56:52.408+00:00|INFO|GsonMessageBodyHandler|CONFIG-9696] Using GSON for REST calls
policy-drools-pdp | [2025-06-13T14:56:52.408+00:00|INFO|JacksonHandler|CONFIG-9696] Using GSON with Jackson behaviors for REST calls
policy-drools-pdp | [2025-06-13T14:56:52.410+00:00|INFO|YamlMessageBodyHandler|CONFIG-9696] Accepting YAML for REST calls
policy-drools-pdp | [2025-06-13T14:56:52.571+00:00|INFO|ServletContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}
policy-drools-pdp | [2025-06-13T14:56:52.579+00:00|INFO|AbstractConnector|CONFIG-9696] Started CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}
policy-drools-pdp | [2025-06-13T14:56:52.581+00:00|INFO|Server|CONFIG-9696] Started oejs.Server@3276732{STARTING}[12.0.21,sto=0] @2423ms
policy-drools-pdp | [2025-06-13T14:56:52.581+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], 
servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 8869 ms.
policy-drools-pdp | [2025-06-13T14:56:52.587+00:00|INFO|LifecycleFsm|main] lifecycle event: start engine
policy-drools-pdp | [2025-06-13T14:56:52.754+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-drools-pdp | 	allow.auto.create.topics = true
policy-drools-pdp | 	auto.commit.interval.ms = 5000
policy-drools-pdp | 	auto.include.jmx.reporter = true
policy-drools-pdp | 	auto.offset.reset = latest
policy-drools-pdp | 	bootstrap.servers = [kafka:9092]
policy-drools-pdp | 	check.crcs = true
policy-drools-pdp | 	client.dns.lookup = use_all_dns_ips
policy-drools-pdp | 	client.id = consumer-7d912964-8778-42e7-b0af-b72511d03f65-1
policy-drools-pdp | 	client.rack = 
policy-drools-pdp | 	connections.max.idle.ms = 540000
policy-drools-pdp | 	default.api.timeout.ms = 60000
policy-drools-pdp | 	enable.auto.commit = true
policy-drools-pdp | 	enable.metrics.push = true
policy-drools-pdp | 	exclude.internal.topics = true
policy-drools-pdp | 	fetch.max.bytes = 52428800
policy-drools-pdp | 	fetch.max.wait.ms = 500
policy-drools-pdp | 	fetch.min.bytes = 1
policy-drools-pdp | 	group.id = 7d912964-8778-42e7-b0af-b72511d03f65
policy-drools-pdp | 	group.instance.id = null
policy-drools-pdp | 	group.protocol = classic
policy-drools-pdp | 	group.remote.assignor = null
policy-drools-pdp | 	heartbeat.interval.ms = 3000
policy-drools-pdp | 	interceptor.classes = []
policy-drools-pdp | 	internal.leave.group.on.close = true
policy-drools-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-drools-pdp | 	isolation.level = read_uncommitted
policy-drools-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-drools-pdp | 	max.partition.fetch.bytes = 1048576
policy-drools-pdp | 	max.poll.interval.ms = 300000
policy-drools-pdp | 	max.poll.records = 500
policy-drools-pdp | 	metadata.max.age.ms = 300000
policy-drools-pdp | 	metadata.recovery.strategy = none
policy-drools-pdp | 	metric.reporters = []
policy-drools-pdp | 	metrics.num.samples = 2
policy-drools-pdp | 	metrics.recording.level = INFO
policy-drools-pdp | 	metrics.sample.window.ms = 30000
policy-drools-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-drools-pdp | 	receive.buffer.bytes = 65536
policy-drools-pdp | 	reconnect.backoff.max.ms = 1000
policy-drools-pdp | 	reconnect.backoff.ms = 50
policy-drools-pdp | 	request.timeout.ms = 30000
policy-drools-pdp | 	retry.backoff.max.ms = 1000
policy-drools-pdp | 	retry.backoff.ms = 100
policy-drools-pdp | 	sasl.client.callback.handler.class = null
policy-drools-pdp | 	sasl.jaas.config = null
policy-drools-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-drools-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
policy-drools-pdp | 	sasl.kerberos.service.name = null
policy-drools-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-drools-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-drools-pdp | 	sasl.login.callback.handler.class = null
policy-drools-pdp | 	sasl.login.class = null
policy-drools-pdp | 	sasl.login.connect.timeout.ms = null
policy-drools-pdp | 	sasl.login.read.timeout.ms = null
policy-drools-pdp | 	sasl.login.refresh.buffer.seconds = 300
policy-drools-pdp | 	sasl.login.refresh.min.period.seconds = 60
policy-drools-pdp | 	sasl.login.refresh.window.factor = 0.8
policy-drools-pdp | 	sasl.login.refresh.window.jitter = 0.05
policy-drools-pdp | 	sasl.login.retry.backoff.max.ms = 10000
policy-drools-pdp | 	sasl.login.retry.backoff.ms = 100
policy-drools-pdp | 	sasl.mechanism = GSSAPI
policy-drools-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-drools-pdp | 	sasl.oauthbearer.expected.audience = null
policy-drools-pdp | 	sasl.oauthbearer.expected.issuer = null
policy-drools-pdp | 	sasl.oauthbearer.header.urlencode = false
policy-drools-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-drools-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-drools-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-drools-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-drools-pdp | 	sasl.oauthbearer.scope.claim.name = scope
policy-drools-pdp | 	sasl.oauthbearer.sub.claim.name = sub
policy-drools-pdp | 	sasl.oauthbearer.token.endpoint.url = null
policy-drools-pdp | 	security.protocol = PLAINTEXT
policy-drools-pdp | 	security.providers = null
policy-drools-pdp | 	send.buffer.bytes = 131072
policy-drools-pdp | 	session.timeout.ms = 45000
policy-drools-pdp | 	socket.connection.setup.timeout.max.ms = 30000
policy-drools-pdp | 	socket.connection.setup.timeout.ms = 10000
policy-drools-pdp | 	ssl.cipher.suites = null
policy-drools-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-drools-pdp | 	ssl.endpoint.identification.algorithm = https
policy-drools-pdp | 	ssl.engine.factory.class = null
policy-drools-pdp | 	ssl.key.password = null
policy-drools-pdp | 	ssl.keymanager.algorithm = SunX509
policy-drools-pdp | 	ssl.keystore.certificate.chain = null
policy-drools-pdp | 	ssl.keystore.key = null
policy-drools-pdp | 	ssl.keystore.location = null
policy-drools-pdp | 	ssl.keystore.password = null
policy-drools-pdp | 	ssl.keystore.type = JKS
policy-drools-pdp | 	ssl.protocol = TLSv1.3
policy-drools-pdp | 	ssl.provider = null
policy-drools-pdp | 	ssl.secure.random.implementation = null
policy-drools-pdp | 	ssl.trustmanager.algorithm = PKIX
policy-drools-pdp | 	ssl.truststore.certificates = null
policy-drools-pdp | 	ssl.truststore.location = null
policy-drools-pdp | 	ssl.truststore.password = null
policy-drools-pdp | 	ssl.truststore.type = JKS
policy-drools-pdp | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-drools-pdp | 
policy-drools-pdp | [2025-06-13T14:56:52.793+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-drools-pdp | [2025-06-13T14:56:52.869+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-drools-pdp | [2025-06-13T14:56:52.869+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-drools-pdp | [2025-06-13T14:56:52.869+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826612867
policy-drools-pdp | [2025-06-13T14:56:52.871+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-1, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Subscribed to topic(s): policy-pdp-pap
policy-drools-pdp | [2025-06-13T14:56:52.871+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7d912964-8778-42e7-b0af-b72511d03f65, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e6308a9
policy-drools-pdp | [2025-06-13T14:56:52.885+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7d912964-8778-42e7-b0af-b72511d03f65, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, 
locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-13T14:56:52.886+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-drools-pdp | allow.auto.create.topics = true policy-drools-pdp | auto.commit.interval.ms = 5000 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | auto.offset.reset = latest policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | check.crcs = true policy-drools-pdp | client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = consumer-7d912964-8778-42e7-b0af-b72511d03f65-2 policy-drools-pdp | client.rack = policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | default.api.timeout.ms = 60000 policy-drools-pdp | enable.auto.commit = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | exclude.internal.topics = true policy-drools-pdp | fetch.max.bytes = 52428800 policy-drools-pdp | fetch.max.wait.ms = 500 policy-drools-pdp | fetch.min.bytes = 1 policy-drools-pdp | group.id = 7d912964-8778-42e7-b0af-b72511d03f65 policy-drools-pdp | group.instance.id = null policy-drools-pdp | group.protocol = classic policy-drools-pdp | group.remote.assignor = null policy-drools-pdp | heartbeat.interval.ms = 3000 policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | internal.leave.group.on.close = true policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-drools-pdp | isolation.level = read_uncommitted policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | max.partition.fetch.bytes = 1048576 policy-drools-pdp | max.poll.interval.ms = 300000 policy-drools-pdp | max.poll.records = 500 
policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-drools-pdp | receive.buffer.bytes = 65536 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | 
sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | session.timeout.ms = 45000 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-drools-pdp | policy-drools-pdp | 
[2025-06-13T14:56:52.886+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-13T14:56:52.896+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-13T14:56:52.896+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-13T14:56:52.896+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826612896 policy-drools-pdp | [2025-06-13T14:56:52.896+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Subscribed to topic(s): policy-pdp-pap policy-drools-pdp | [2025-06-13T14:56:52.897+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7d912964-8778-42e7-b0af-b72511d03f65, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-drools-pdp | [2025-06-13T14:56:52.901+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=31455eef-6062-4955-96fa-dbad38e156f2, alive=false, publisher=null]]: starting policy-drools-pdp | [2025-06-13T14:56:52.913+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-drools-pdp | acks = -1 policy-drools-pdp | auto.include.jmx.reporter = true policy-drools-pdp | batch.size = 16384 policy-drools-pdp | bootstrap.servers = [kafka:9092] policy-drools-pdp | buffer.memory = 33554432 policy-drools-pdp | 
client.dns.lookup = use_all_dns_ips policy-drools-pdp | client.id = producer-1 policy-drools-pdp | compression.gzip.level = -1 policy-drools-pdp | compression.lz4.level = 9 policy-drools-pdp | compression.type = none policy-drools-pdp | compression.zstd.level = 3 policy-drools-pdp | connections.max.idle.ms = 540000 policy-drools-pdp | delivery.timeout.ms = 120000 policy-drools-pdp | enable.idempotence = true policy-drools-pdp | enable.metrics.push = true policy-drools-pdp | interceptor.classes = [] policy-drools-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | linger.ms = 0 policy-drools-pdp | max.block.ms = 60000 policy-drools-pdp | max.in.flight.requests.per.connection = 5 policy-drools-pdp | max.request.size = 1048576 policy-drools-pdp | metadata.max.age.ms = 300000 policy-drools-pdp | metadata.max.idle.ms = 300000 policy-drools-pdp | metadata.recovery.strategy = none policy-drools-pdp | metric.reporters = [] policy-drools-pdp | metrics.num.samples = 2 policy-drools-pdp | metrics.recording.level = INFO policy-drools-pdp | metrics.sample.window.ms = 30000 policy-drools-pdp | partitioner.adaptive.partitioning.enable = true policy-drools-pdp | partitioner.availability.timeout.ms = 0 policy-drools-pdp | partitioner.class = null policy-drools-pdp | partitioner.ignore.keys = false policy-drools-pdp | receive.buffer.bytes = 32768 policy-drools-pdp | reconnect.backoff.max.ms = 1000 policy-drools-pdp | reconnect.backoff.ms = 50 policy-drools-pdp | request.timeout.ms = 30000 policy-drools-pdp | retries = 2147483647 policy-drools-pdp | retry.backoff.max.ms = 1000 policy-drools-pdp | retry.backoff.ms = 100 policy-drools-pdp | sasl.client.callback.handler.class = null policy-drools-pdp | sasl.jaas.config = null policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-drools-pdp | sasl.kerberos.service.name = null policy-drools-pdp | 
sasl.kerberos.ticket.renew.jitter = 0.05 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-drools-pdp | sasl.login.callback.handler.class = null policy-drools-pdp | sasl.login.class = null policy-drools-pdp | sasl.login.connect.timeout.ms = null policy-drools-pdp | sasl.login.read.timeout.ms = null policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 policy-drools-pdp | sasl.mechanism = GSSAPI policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-drools-pdp | sasl.oauthbearer.expected.audience = null policy-drools-pdp | sasl.oauthbearer.expected.issuer = null policy-drools-pdp | sasl.oauthbearer.header.urlencode = false policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null policy-drools-pdp | security.protocol = PLAINTEXT policy-drools-pdp | security.providers = null policy-drools-pdp | send.buffer.bytes = 131072 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000 policy-drools-pdp | ssl.cipher.suites = null policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-drools-pdp | ssl.endpoint.identification.algorithm = https policy-drools-pdp | ssl.engine.factory.class = null policy-drools-pdp | ssl.key.password = null 
policy-drools-pdp | ssl.keymanager.algorithm = SunX509 policy-drools-pdp | ssl.keystore.certificate.chain = null policy-drools-pdp | ssl.keystore.key = null policy-drools-pdp | ssl.keystore.location = null policy-drools-pdp | ssl.keystore.password = null policy-drools-pdp | ssl.keystore.type = JKS policy-drools-pdp | ssl.protocol = TLSv1.3 policy-drools-pdp | ssl.provider = null policy-drools-pdp | ssl.secure.random.implementation = null policy-drools-pdp | ssl.trustmanager.algorithm = PKIX policy-drools-pdp | ssl.truststore.certificates = null policy-drools-pdp | ssl.truststore.location = null policy-drools-pdp | ssl.truststore.password = null policy-drools-pdp | ssl.truststore.type = JKS policy-drools-pdp | transaction.timeout.ms = 60000 policy-drools-pdp | transactional.id = null policy-drools-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-drools-pdp | policy-drools-pdp | [2025-06-13T14:56:52.914+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-drools-pdp | [2025-06-13T14:56:52.924+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-drools-pdp | [2025-06-13T14:56:52.942+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-drools-pdp | [2025-06-13T14:56:52.942+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-drools-pdp | [2025-06-13T14:56:52.942+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826612942 policy-drools-pdp | [2025-06-13T14:56:52.944+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=31455eef-6062-4955-96fa-dbad38e156f2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-drools-pdp | [2025-06-13T14:56:52.949+00:00|INFO|LifecycleStateDefault|main] LifecycleStateTerminated(): state-change from TERMINATED to PASSIVE policy-drools-pdp | [2025-06-13T14:56:52.949+00:00|INFO|LifecycleFsm|pool-2-thread-1] lifecycle event: status policy-drools-pdp | [2025-06-13T14:56:52.950+00:00|INFO|MdcTransactionImpl|main] policy-drools-pdp | [2025-06-13T14:56:52.954+00:00|INFO|Main|main] Started policy-drools-pdp service successfully. 
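The ConsumerConfig and ProducerConfig dumps above list every effective setting of the Kafka clients the PDP creates at startup, each as a `key = value` line behind the docker-compose container prefix. When triaging a CSIT run it is often handy to pull those dumps into a dictionary and diff them between builds; a minimal sketch (the parser and sample lines are illustrative, not part of ONAP tooling — the values are copied from the log above):

```python
# Sketch: recover effective Kafka client settings from a docker-compose
# style log dump. parse_config_dump() and its sample input are illustrative;
# the key/value pairs are taken verbatim from the ConsumerConfig block above.
def parse_config_dump(lines, prefix="policy-drools-pdp |"):
    """Turn 'key = value' lines from a ConsumerConfig/ProducerConfig dump
    into a dict, stripping the container prefix added by docker-compose."""
    config = {}
    for line in lines:
        body = line.split(prefix, 1)[-1].strip()
        if " = " in body:
            key, _, value = body.partition(" = ")
            config[key.strip()] = value.strip()
    return config

sample = [
    "policy-drools-pdp | \tbootstrap.servers = [kafka:9092]",
    "policy-drools-pdp | \tgroup.id = 7d912964-8778-42e7-b0af-b72511d03f65",
    "policy-drools-pdp | \tauto.offset.reset = latest",
    "policy-drools-pdp | \tsession.timeout.ms = 45000",
]
cfg = parse_config_dump(sample)
print(cfg["bootstrap.servers"])  # → [kafka:9092]
```

Diffing two such dicts (one per build) quickly shows which client setting changed between a passing and a failing run.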
policy-drools-pdp | [2025-06-13T14:56:52.965+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers:
policy-drools-pdp | []
policy-drools-pdp | [2025-06-13T14:56:53.281+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-drools-pdp | [2025-06-13T14:56:53.281+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-drools-pdp | [2025-06-13T14:56:53.282+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-drools-pdp | [2025-06-13T14:56:53.283+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-drools-pdp | [2025-06-13T14:56:53.289+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] (Re-)joining group
policy-drools-pdp | [2025-06-13T14:56:53.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Request joining group due to: need to re-join with the given member-id: consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f
policy-drools-pdp | [2025-06-13T14:56:53.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] (Re-)joining group
policy-drools-pdp | [2025-06-13T14:56:56.308+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Successfully joined group with generation Generation{generationId=1, memberId='consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f', protocol='range'}
policy-drools-pdp | [2025-06-13T14:56:56.319+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Finished assignment for group at generation 1: {consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f=Assignment(partitions=[policy-pdp-pap-0])}
policy-drools-pdp | [2025-06-13T14:56:56.328+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Successfully synced group in generation Generation{generationId=1, memberId='consumer-7d912964-8778-42e7-b0af-b72511d03f65-2-b63dc58e-f4c1-41ac-930f-637126c1787f', protocol='range'}
policy-drools-pdp | [2025-06-13T14:56:56.328+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-drools-pdp | [2025-06-13T14:56:56.330+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Adding newly assigned partitions: policy-pdp-pap-0
policy-drools-pdp | [2025-06-13T14:56:56.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Found no committed offset for partition policy-pdp-pap-0
policy-drools-pdp | [2025-06-13T14:56:56.354+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7d912964-8778-42e7-b0af-b72511d03f65-2, groupId=7d912964-8778-42e7-b0af-b72511d03f65] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.5:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | 
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap | 
policy-pap |  :: Spring Boot ::                (v3.4.6)
policy-pap | 
policy-pap | [2025-06-13T14:56:39.709+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-13T14:56:39.711+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-13T14:56:41.203+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-13T14:56:41.292+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 7 JPA repository interfaces.
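The drools-pdp consumer re-joins the group at 14:56:53.303 and only reports "Successfully joined group" at 14:56:56.308, roughly a three-second rebalance before partition policy-pdp-pap-0 is assigned and its offset reset. When checking whether such gaps are normal or a symptom, the ISO-8601 timestamps in the log can be diffed directly (a minimal sketch; the two timestamp strings are copied verbatim from the log above):

```python
from datetime import datetime

# Sketch: measure how long the consumer-group rebalance took, using the
# ISO-8601 timestamps from the log lines above (values copied from the log).
def ts(stamp):
    # fromisoformat handles the "+00:00" offset and millisecond fraction
    return datetime.fromisoformat(stamp)

rejoin = ts("2025-06-13T14:56:53.289+00:00")   # first "(Re-)joining group"
joined = ts("2025-06-13T14:56:56.308+00:00")   # "Successfully joined group"
elapsed = (joined - rejoin).total_seconds()
print(elapsed)  # → 3.019
```

A few seconds here is expected for a first join of a fresh group; much longer gaps would point at broker or network trouble worth investigating in the CSIT environment.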
policy-pap | [2025-06-13T14:56:42.296+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-13T14:56:42.311+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T14:56:42.312+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-13T14:56:42.312+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-13T14:56:42.362+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-13T14:56:42.363+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2590 ms policy-pap | [2025-06-13T14:56:42.842+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-13T14:56:42.940+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-13T14:56:43.023+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-13T14:56:43.447+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-13T14:56:43.495+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-13T14:56:43.740+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 policy-pap | [2025-06-13T14:56:43.742+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-pap | [2025-06-13T14:56:43.840+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-13T14:56:45.941+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-13T14:56:45.945+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-13T14:56:47.221+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = 
null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:56:47.282+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:47.425+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | 
[2025-06-13T14:56:47.425+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:47.425+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826607424 policy-pap | [2025-06-13T14:56:47.427+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-1, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:56:47.428+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 
policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap 
| sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:56:47.429+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:47.436+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:56:47.436+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:47.436+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826607435 policy-pap | [2025-06-13T14:56:47.436+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:56:47.768+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - 
PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-13T14:56:47.896+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-13T14:56:47.977+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-13T14:56:48.206+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
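The `PapDatabaseInitializer` entry above shows the initial PdpGroup being loaded from `/opt/app/policy/pap/etc/mounted/groups.json`. As a reading aid, here is a hypothetical reconstruction of that file's shape, built only from the fields visible in the log line (group name, state, pdpType, supported policy types, desired instance count); the actual file in the CSIT setup may contain additional fields.

```python
import json

# Hypothetical sketch of the mounted groups.json, reconstructed from the
# PdpGroups(...) toString() in the log above -- not the literal file contents.
groups = {
    "groups": [{
        "name": "defaultGroup",
        "description": "The default group that registers all supported "
                       "policy types and pdps.",
        "pdpGroupState": "ACTIVE",
        "pdpSubgroups": [{
            "pdpType": "drools",
            "supportedPolicyTypes": [
                {"name": "onap.policies.controlloop.operational.common.Drools",
                 "version": "1.0.0"},
                {"name": "onap.policies.native.drools.Controller",
                 "version": "1.0.0"},
                {"name": "onap.policies.native.drools.Artifact",
                 "version": "1.0.0"},
            ],
            "policies": [],
            "desiredInstanceCount": 1,
        }],
    }],
}

print(json.dumps(groups, indent=2))
```

This matches the logged state of `currentInstanceCount=0, desiredInstanceCount=1`: PAP now waits for one drools PDP instance to register into this subgroup.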
policy-pap | [2025-06-13T14:56:48.963+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-13T14:56:49.083+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T14:56:49.103+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-13T14:56:49.130+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-13T14:56:49.131+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-13T14:56:49.132+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-13T14:56:49.132+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-13T14:56:49.133+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-13T14:56:49.133+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-13T14:56:49.133+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-13T14:56:49.135+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@76ec6ae0 policy-pap | [2025-06-13T14:56:49.145+00:00|INFO|SingleThreadedBusTopicSource|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:56:49.160+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | 
max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap 
| sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:56:49.160+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:49.171+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:56:49.171+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:49.171+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826609171 policy-pap | [2025-06-13T14:56:49.172+00:00|INFO|ClassicKafkaConsumer|main] [Consumer 
clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:56:49.172+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-13T14:56:49.172+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=142c45b4-580a-408f-abb0-f0f86f7a4d65, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@48a5ef5c policy-pap | [2025-06-13T14:56:49.173+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=142c45b4-580a-408f-abb0-f0f86f7a4d65, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:56:49.173+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | 
sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | 
ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:56:49.173+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:49.180+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:56:49.180+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:49.180+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826609180 policy-pap | [2025-06-13T14:56:49.180+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:56:49.181+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-13T14:56:49.181+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=142c45b4-580a-408f-abb0-f0f86f7a4d65, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | 
[2025-06-13T14:56:49.181+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:56:49.181+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=75ed316f-fc21-47d4-a6e1-65f566a57b67, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:56:49.195+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | 
metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:56:49.196+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:49.213+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
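The `ProducerConfig` dump above shows `enable.idempotence = true` with `acks = -1` and `retries = 2147483647`, which is why the next line reports "Instantiated an idempotent producer": the broker de-duplicates retried batches by producer id and sequence number, so aggressive retries cannot introduce duplicates. A minimal toy model of that dedup rule (not Kafka's actual broker code) looks like this:

```python
# Toy model of idempotent-producer dedup: the broker remembers the last
# accepted sequence number per producer id and drops any batch whose
# sequence it has already seen (i.e. a retry of an acknowledged write).
class Broker:
    def __init__(self):
        self.last_seq = {}   # producer_id -> last accepted sequence number
        self.log = []

    def append(self, producer_id, seq, record):
        if self.last_seq.get(producer_id, -1) >= seq:
            return False     # duplicate caused by a retry; dropped
        self.last_seq[producer_id] = seq
        self.log.append(record)
        return True

broker = Broker()
broker.append(1, 0, "a")
broker.append(1, 0, "a")     # retried batch, deduplicated
broker.append(1, 1, "b")
print(broker.log)            # ['a', 'b']
```

This is why the config can safely combine `retries = 2147483647` with exactly-once-per-partition delivery semantics for each producer session.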
policy-pap | [2025-06-13T14:56:49.232+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:56:49.233+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:49.233+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826609232 policy-pap | [2025-06-13T14:56:49.233+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=75ed316f-fc21-47d4-a6e1-65f566a57b67, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:56:49.233+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fc5deb63-0b4b-4916-bc9b-09d58cf3a2f2, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:56:49.233+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | 
metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | 
sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:56:49.234+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:56:49.234+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826609239 policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fc5deb63-0b4b-4916-bc9b-09d58cf3a2f2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-13T14:56:49.239+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-13T14:56:49.240+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-13T14:56:49.240+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-13T14:56:49.248+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-13T14:56:49.248+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-13T14:56:49.248+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-13T14:56:49.248+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-13T14:56:49.249+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-13T14:56:49.249+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-13T14:56:49.250+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.36 seconds (process running for 10.942) policy-pap | [2025-06-13T14:56:49.249+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | 
[2025-06-13T14:56:49.794+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-13T14:56:49.795+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-pap | [2025-06-13T14:56:49.795+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-pap | [2025-06-13T14:56:49.796+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-pap | [2025-06-13T14:56:49.846+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-13T14:56:49.846+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-13T14:56:49.848+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:49.848+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Cluster ID: qXUltZKbTyOIemKjVEFwng
policy-pap | [2025-06-13T14:56:49.973+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-13T14:56:49.976+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:50.194+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:50.211+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:50.566+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:50.629+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-13T14:56:51.330+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-13T14:56:51.335+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap]
(Re-)joining group
policy-pap | [2025-06-13T14:56:51.364+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e
policy-pap | [2025-06-13T14:56:51.364+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-13T14:56:51.494+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-13T14:56:51.496+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] (Re-)joining group
policy-pap | [2025-06-13T14:56:51.505+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Request joining group due to: need to re-join with the given member-id: consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960
policy-pap | [2025-06-13T14:56:51.505+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] (Re-)joining group
policy-pap | [2025-06-13T14:56:54.389+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e', protocol='range'}
policy-pap | [2025-06-13T14:56:54.399+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-13T14:56:54.429+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-81025475-161f-4d8b-addf-52cd7d2bf74e', protocol='range'}
policy-pap | [2025-06-13T14:56:54.430+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-13T14:56:54.433+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-13T14:56:54.449+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-13T14:56:54.466+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-13T14:56:54.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960', protocol='range'}
policy-pap | [2025-06-13T14:56:54.510+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Finished assignment for group at generation 1: {consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-13T14:56:54.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3-0c8a57bd-f632-4f6c-8169-ecae6d15c960', protocol='range'}
policy-pap | [2025-06-13T14:56:54.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-13T14:56:54.517+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-13T14:56:54.519+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-13T14:56:54.521+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a-3, groupId=acdc8c1d-1d7d-4b8e-999c-4ff83ce8e37a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-13T14:57:41.612+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | [2025-06-13T14:57:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-pap | [2025-06-13T14:57:41.615+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | syncing data to disk ... ok
postgres |
postgres |
postgres | Success.
You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | waiting for server to start....2025-06-13 14:56:12.161 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 14:56:12.163 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 14:56:12.168 UTC [51] LOG: database system was shut down at 2025-06-13 14:56:11 UTC
postgres | 2025-06-13 14:56:12.174 UTC [48] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres |
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres |
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres |
postgres | waiting for server to shut down....2025-06-13 14:56:13.569 UTC [48] LOG: received fast shutdown request
postgres | 2025-06-13 14:56:13.571 UTC [48] LOG: aborting any active transactions
postgres | 2025-06-13 14:56:13.574 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1
postgres | 2025-06-13 14:56:13.576 UTC [49] LOG: shutting down
postgres | 2025-06-13 14:56:13.578 UTC [49] LOG: checkpoint starting: shutdown immediate
postgres | 2025-06-13 14:56:14.271 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.476 s, sync=0.186 s, total=0.695 s; sync files=1788, longest=0.016 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218
postgres | 2025-06-13 14:56:14.284 UTC [48] LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | 2025-06-13 14:56:14.399 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 14:56:14.399 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres | 2025-06-13 14:56:14.399 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres | 2025-06-13 14:56:14.404 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 14:56:14.410 UTC [101] LOG: database system was shut down at 2025-06-13 14:56:14 UTC
postgres | 2025-06-13 14:56:14.424 UTC [1] LOG: database system is ready to accept connections
prometheus | time=2025-06-13T14:56:09.132Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | time=2025-06-13T14:56:09.132Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)"
prometheus | time=2025-06-13T14:56:09.132Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | time=2025-06-13T14:56:09.134Z level=INFO
source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs
prometheus | time=2025-06-13T14:56:09.139Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090
prometheus | time=2025-06-13T14:56:09.140Z level=INFO source=main.go:1266 msg="Starting TSDB ..."
prometheus | time=2025-06-13T14:56:09.142Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090
prometheus | time=2025-06-13T14:56:09.142Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090
prometheus | time=2025-06-13T14:56:09.146Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb
prometheus | time=2025-06-13T14:56:09.146Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.37µs
prometheus | time=2025-06-13T14:56:09.146Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb
prometheus | time=2025-06-13T14:56:09.147Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=1.104815ms
prometheus | time=2025-06-13T14:56:09.147Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=51.253µs wal_replay_duration=1.200521ms wbl_replay_duration=260ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.37µs total_replay_duration=1.502771ms
prometheus | time=2025-06-13T14:56:09.151Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC
prometheus | time=2025-06-13T14:56:09.151Z level=INFO source=main.go:1290 msg="TSDB started"
prometheus | time=2025-06-13T14:56:09.151Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | time=2025-06-13T14:56:09.153Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75
prometheus | time=2025-06-13T14:56:09.153Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.48µs remote_storage=3.19µs web_handler=1.13µs query_engine=1.87µs scrape=351.964µs scrape_sd=176.902µs notify=293.98µs notify_sd=28.762µs rules=2.031µs tracing=6.41µs filename=/etc/prometheus/prometheus.yml totalDuration=1.663532ms
prometheus | time=2025-06-13T14:56:09.153Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests."
prometheus | time=2025-06-13T14:56:09.153Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager"
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2025-06-13 14:56:11,145] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,147] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,147] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,147] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,148] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,149] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 14:56:11,150] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 14:56:11,150]
INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 14:56:11,150] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2025-06-13 14:56:11,152] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2025-06-13 14:56:11,152] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,153] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,153] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,153] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,153] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 14:56:11,153] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2025-06-13 14:56:11,163] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2025-06-13 14:56:11,165] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-13 14:56:11,165] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-13 14:56:11,167] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 14:56:11,175] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,175] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,176] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/k
afka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/
java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/us
r/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 14:56:11,177] INFO Server
environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,177] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,178] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,178] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,178] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,178] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,178] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,179] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-13 14:56:11,179] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,179] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 
14:56:11,181] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 14:56:11,181] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,182] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:11,184] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,184] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,185] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 14:56:11,185] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 14:56:11,185] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,207] INFO Logging 
initialized @407ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-13 14:56:11,282] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:11,282] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:11,310] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 14:56:11,361] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:11,361] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:11,362] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:11,365] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-13 14:56:11,375] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:11,387] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-13 14:56:11,387] INFO Started @591ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 14:56:11,387] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-13 14:56:11,392] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 14:56:11,393] WARN maxCnxns is not configured, using default value 0. 
(org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 14:56:11,399] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 14:56:11,401] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 14:56:11,414] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 14:56:11,414] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 14:56:11,414] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:11,414] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:11,419] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-13 14:56:11,419] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:56:11,422] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:11,423] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:56:11,423] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:11,432] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-13 14:56:11,432] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms 
(org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-13 14:56:11,453] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-13 14:56:11,457] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-13 14:56:12,522] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
Tearing down containers...
 Container grafana  Stopping
 Container policy-csit  Stopping
 Container policy-drools-pdp  Stopping
 Container policy-csit  Stopped
 Container policy-csit  Removing
 Container policy-csit  Removed
 Container grafana  Stopped
 Container grafana  Removing
 Container grafana  Removed
 Container prometheus  Stopping
 Container prometheus  Stopped
 Container prometheus  Removing
 Container prometheus  Removed
 Container policy-drools-pdp  Stopped
 Container policy-drools-pdp  Removing
 Container policy-drools-pdp  Removed
 Container policy-pap  Stopping
 Container policy-pap  Stopped
 Container policy-pap  Removing
 Container policy-pap  Removed
 Container policy-api  Stopping
 Container kafka  Stopping
 Container kafka  Stopped
 Container kafka  Removing
 Container kafka  Removed
 Container zookeeper  Stopping
 Container zookeeper  Stopped
 Container zookeeper  Removing
 Container zookeeper  Removed
 Container policy-api  Stopped
 Container policy-api  Removing
 Container policy-api  Removed
 Container policy-db-migrator  Stopping
 Container policy-db-migrator  Stopped
 Container policy-db-migrator  Removing
 Container policy-db-migrator  Removed
 Container postgres  Stopping
 Container postgres  Stopped
 Container postgres  Removing
 Container postgres  Removed
 Network compose_default  Removing
 Network compose_default  Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2105 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
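The ZooKeeper startup lines above record a standalone server with tickTime 3000 ms, session timeouts of 6000–60000 ms (ZooKeeper's defaults of 2x and 20x tickTime), client port 2181, and transaction logs and snapshots under /var/lib/zookeeper. A hedged sketch of a zoo.cfg that would produce those logged values — the file layout is an assumption; the values themselves come from the log:

```
# Hypothetical zoo.cfg matching the parameters logged above.
tickTime=3000                      # min/max session timeouts default to 2x / 20x this
dataDir=/var/lib/zookeeper/data    # snapshots ("snapdir .../data/version-2")
dataLogDir=/var/lib/zookeeper/log  # transaction logs ("datadir .../log/version-2")
clientPort=2181                    # "binding to port 0.0.0.0/0.0.0.0:2181"
admin.serverPort=8080              # JettyAdminServer on 0.0.0.0:8080, URL /commands
```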
-Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins4758388126394491394.sh ---> sysstat.sh [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins1677300322024123361.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=debian + workspace=/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp ']' + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/archives/ [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins9517893808698263389.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ijNI from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ijNI/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins9910708431055038033.sh provisioning config 
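The package-listing.sh trace above snapshots `dpkg -l | grep '^ii'` at build start and end, diffs the two lists, and archives the result. A minimal sketch of that diff step, using stub package lists in demo paths rather than a live dpkg database:

```shell
# Demo paths; the job uses /tmp/packages_start.txt, /tmp/packages_end.txt,
# /tmp/packages_diff.txt and fills them from `dpkg -l | grep '^ii'`.
START=/tmp/demo_packages_start.txt
END=/tmp/demo_packages_end.txt
DIFF=/tmp/demo_packages_diff.txt
printf 'ii  curl  7.58.0\nii  git  2.17.1\n' > "$START"
printf 'ii  curl  7.58.0\nii  git  2.17.1\nii  jq  1.5\n' > "$END"
# diff exits 1 when the files differ, so mask the status for `set -e` scripts.
diff "$START" "$END" > "$DIFF" || true
grep -c '^>' "$DIFF"    # packages added during the build; prints 1 here
```

The archived diff makes it easy to see which OS packages a CSIT run pulled in, without keeping the full start/end listings side by side.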
files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp@tmp/config5747439943787524542tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins721199517963503766.sh ---> create-netrc.sh [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins12826437929818479757.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ijNI from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ijNI/bin to PATH [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins12176988663033866334.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash /tmp/jenkins6361621924482275206.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ijNI from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-ijNI/bin to PATH INFO: No Stack... 
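The repeated `lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ijNI from file:/tmp/.os_lf_venv` lines above show the venv created by the first python-tools-install.sh step being reused by every later script: the venv path is cached in a marker file. A sketch of that cache-or-create pattern — the demo path and the use of mktemp are illustrative, not lftools' actual implementation:

```shell
# Illustrative version of the venv-path caching seen in lf-activate-venv().
VENV_FILE=/tmp/.os_lf_venv_demo           # the job itself uses /tmp/.os_lf_venv
if [ -f "$VENV_FILE" ]; then
    venv=$(cat "$VENV_FILE")              # later scripts hit this reuse branch
else
    venv=$(mktemp -d /tmp/venv-XXXX)      # first call: create and remember it
    printf '%s' "$venv" > "$VENV_FILE"
fi
export PATH="$venv/bin:$PATH"             # cf. "Adding /tmp/venv-.../bin to PATH"
echo "$venv"
```

Caching the path this way keeps the five post-build scripts from each paying the cost of installing lftools into a fresh venv.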
INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-drools-pdp-master-project-csit-verify-drools-pdp] $ /bin/bash -l /tmp/jenkins897832453536468700.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-verify-drools-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ijNI from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ijNI/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-verify-drools-pdp/812 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-20901 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2800.000 BogoMIPS: 5600.00 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves 
clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 140G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 881 23665 0 7620 30830 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:63:01:90 brd ff:ff:ff:ff:ff:ff inet 10.30.107.130/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 86066sec preferred_lft 86066sec inet6 fe80::f816:3eff:fe63:190/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:20:d8:22:28 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:20ff:fed8:2228/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20901) 06/13/25 _x86_64_ (8 CPU) 14:53:49 LINUX RESTART (8 CPU) 14:54:01 tps rtps wtps bread/s bwrtn/s 14:55:01 378.04 73.85 304.18 5313.51 111168.54 14:56:01 478.61 20.69 457.91 2287.64 231296.77 14:57:01 370.19 2.88 367.31 423.26 61946.61 14:58:01 218.41 0.42 218.00 39.86 33888.09 14:59:01 76.65 1.35 75.30 102.12 2482.12 Average: 304.39 19.84 284.55 1633.30 88161.19 14:54:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 14:55:01 30173804 31706640 2765416 8.40 67184 1776728 1388140 4.08 846336 
1633872 159856 14:56:01 24891120 31643496 8048100 24.43 150580 6672188 1723168 5.07 1017796 6447476 1002800 14:57:01 22634684 29690060 10304536 31.28 165832 6962376 8394968 24.70 3190372 6447216 2160 14:58:01 22029024 29617848 10910196 33.12 206576 7402668 8670504 25.51 3355796 6825496 1908 14:59:01 24263252 31599152 8675968 26.34 207932 7143588 1622472 4.77 1442284 6581624 11560 Average: 24798377 30851439 8140843 24.71 159621 5991510 4359850 12.83 1970517 5587137 235657 14:54:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 14:55:01 ens3 591.93 376.40 1711.24 84.37 0.00 0.00 0.00 0.00 14:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:55:01 lo 1.93 1.93 0.21 0.21 0.00 0.00 0.00 0.00 14:56:01 ens3 1315.19 774.36 38097.82 65.49 0.00 0.00 0.00 0.00 14:56:01 br-28054be1ecb6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:56:01 lo 13.66 13.66 1.25 1.25 0.00 0.00 0.00 0.00 14:57:01 veth069f2a4 5.92 5.48 0.73 0.65 0.00 0.00 0.00 0.00 14:57:01 veth16de970 150.31 172.90 27.86 26.75 0.00 0.00 0.00 0.00 14:57:01 veth223cebc 91.87 91.60 16.03 18.63 0.00 0.00 0.00 0.00 14:57:01 vethebc7e0d 1.72 1.92 0.18 0.18 0.00 0.00 0.00 0.00 14:58:01 veth069f2a4 6.23 9.20 1.32 0.71 0.00 0.00 0.00 0.00 14:58:01 veth16de970 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00 14:58:01 veth223cebc 0.18 0.23 0.54 0.02 0.00 0.00 0.00 0.00 14:58:01 vethebc7e0d 3.30 4.87 0.55 0.38 0.00 0.00 0.00 0.00 14:59:01 ens3 2211.26 1376.02 42397.70 195.30 0.00 0.00 0.00 0.00 14:59:01 docker0 125.26 167.11 8.09 1346.96 0.00 0.00 0.00 0.00 14:59:01 lo 25.50 25.50 2.31 2.31 0.00 0.00 0.00 0.00 Average: ens3 441.00 274.14 8478.14 38.96 0.00 0.00 0.00 0.00 Average: docker0 25.05 33.42 1.62 269.38 0.00 0.00 0.00 0.00 Average: lo 4.50 4.50 0.41 0.41 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20901) 06/13/25 _x86_64_ (8 CPU) 14:53:49 LINUX RESTART (8 CPU) 14:54:01 CPU %user %nice %system %iowait 
%steal %idle 14:55:01 all 10.04 0.00 1.45 3.23 0.04 85.24 14:55:01 0 5.22 0.00 1.25 0.43 0.05 93.04 14:55:01 1 10.67 0.00 1.45 0.63 0.07 87.18 14:55:01 2 26.74 0.00 2.09 0.93 0.07 70.17 14:55:01 3 3.63 0.00 1.92 0.30 0.03 94.12 14:55:01 4 11.95 0.00 1.35 5.13 0.03 81.53 14:55:01 5 10.71 0.00 1.03 4.89 0.03 83.34 14:55:01 6 5.23 0.00 1.54 8.84 0.03 84.36 14:55:01 7 6.15 0.00 0.95 4.71 0.03 88.16 14:56:01 all 18.91 0.00 7.58 7.16 0.07 66.27 14:56:01 0 15.85 0.00 6.71 2.46 0.05 74.93 14:56:01 1 19.93 0.00 7.20 1.63 0.05 71.18 14:56:01 2 27.09 0.00 8.69 11.71 0.08 52.44 14:56:01 3 17.47 0.00 7.15 1.26 0.07 74.05 14:56:01 4 16.48 0.00 7.22 6.24 0.07 69.99 14:56:01 5 23.03 0.00 7.56 2.66 0.07 66.68 14:56:01 6 15.69 0.00 8.79 18.20 0.07 57.25 14:56:01 7 15.75 0.00 7.34 13.14 0.08 63.68 14:57:01 all 27.81 0.00 3.74 2.74 0.09 65.62 14:57:01 0 27.55 0.00 3.72 0.72 0.08 67.92 14:57:01 1 25.39 0.00 3.67 2.86 0.10 67.98 14:57:01 2 29.59 0.00 4.12 1.44 0.08 64.76 14:57:01 3 33.70 0.00 3.92 0.65 0.08 61.64 14:57:01 4 28.45 0.00 3.73 2.87 0.08 64.87 14:57:01 5 29.93 0.00 3.65 2.97 0.08 63.37 14:57:01 6 24.60 0.00 3.97 9.32 0.10 62.00 14:57:01 7 23.24 0.00 3.18 1.09 0.10 72.39 14:58:01 all 8.52 0.00 2.49 1.16 0.08 87.74 14:58:01 0 10.22 0.00 2.33 0.64 0.08 86.74 14:58:01 1 8.26 0.00 2.53 2.83 0.05 86.33 14:58:01 2 6.56 0.00 3.25 2.78 0.08 87.32 14:58:01 3 11.55 0.00 1.78 0.22 0.07 86.39 14:58:01 4 7.55 0.00 2.52 1.54 0.08 88.30 14:58:01 5 7.08 0.00 1.89 0.74 0.07 90.22 14:58:01 6 9.03 0.00 3.65 0.30 0.10 86.92 14:58:01 7 7.95 0.00 2.01 0.25 0.08 89.70 14:59:01 all 5.66 0.00 0.94 0.19 0.04 93.16 14:59:01 0 2.76 0.00 0.86 0.86 0.05 95.46 14:59:01 1 1.57 0.00 0.84 0.07 0.05 97.48 14:59:01 2 5.54 0.00 1.10 0.05 0.02 93.30 14:59:01 3 13.47 0.00 1.15 0.08 0.03 85.26 14:59:01 4 1.50 0.00 0.68 0.25 0.03 97.53 14:59:01 5 14.72 0.00 1.18 0.15 0.05 83.89 14:59:01 6 4.24 0.00 0.98 0.07 0.05 94.66 14:59:01 7 1.49 0.00 0.72 0.03 0.03 97.73 Average: all 14.17 0.00 3.24 2.89 0.06 79.64 Average: 0 
12.29 0.00 2.97 1.02 0.06 83.65 Average: 1 13.15 0.00 3.13 1.60 0.06 82.05 Average: 2 19.09 0.00 3.84 3.37 0.07 73.64 Average: 3 15.95 0.00 3.18 0.50 0.06 80.31 Average: 4 13.17 0.00 3.09 3.20 0.06 80.47 Average: 5 17.07 0.00 3.05 2.28 0.06 77.54 Average: 6 11.74 0.00 3.78 7.33 0.07 77.09 Average: 7 10.89 0.00 2.84 3.84 0.07 82.36
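In the `sar -P ALL` table above, the closing "Average:" rows are sar's own time-weighted means over the five sampling intervals; a plain arithmetic mean of the `all` %idle column lands within a few hundredths of sar's 79.64, since the intervals are all one minute. A quick check with awk, using the five %idle samples copied from the log:

```shell
# %idle for "all" CPUs at 14:55..14:59, copied from the sar output above.
printf '85.24\n66.27\n65.62\n87.74\n93.16\n' |
awk '{ sum += $1 } END { printf "%.2f\n", sum / NR }'   # prints 79.61
```

The small gap versus sar's 79.64 comes from sar averaging the underlying counters rather than the rounded per-interval percentages.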