Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-20975 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-NPJsoIw03hHq/agent.2069
SSH_AGENT_PID=2071
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13939384302638469769.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13939384302638469769.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
Commit message: "Remove VFC from docker compose and helm configurations"
 > git rev-list --no-walk 1e361efcd8a4b3caab4f41f34078024e85ac9d73 # timeout=10
provisioning config files...
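For anyone reproducing this workspace outside Jenkins, the clone/checkout above boils down to the following sketch; the mirror URL and commit SHA are taken verbatim from the log:

$ git clone git://cloud.onap.org/mirror/policy/docker.git
$ cd docker
$ git checkout 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c   # "Remove VFC from docker compose and helm configurations"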
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8680403825507081097.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-1fkA
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-1fkA/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.4.26
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
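For reference, the lf-activate-venv() bootstrap earlier in this step amounts to a few shell commands. A minimal sketch, assuming a fixed venv path (the real path, e.g. /tmp/venv-1fkA, is generated per build by LF's global-jjb tooling):

$ python3 -m venv /tmp/venv-1fkA                          # "Creating python3 venv"
$ /tmp/venv-1fkA/bin/pip install --upgrade pip lftools    # "Installing: lftools"
$ export PATH=/tmp/venv-1fkA/bin:$PATH                    # "Adding /tmp/venv-1fkA/bin to PATH"
$ pip freeze                                              # the "Generating Requirements File" package list above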
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins1044944308728739529.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins4039090778750723394.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  1 60.2M    1  856k    0     0  3196k      0  0:00:19 --:--:--  0:00:19 3196k
100 60.2M  100 60.2M    0     0  76.4M      0 --:--:-- --:--:-- --:--:--  114M
Setting project configuration for: pap
Configuring docker compose...
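Two notes on this step. First, the docker login warnings above are triggered by passing --password on the command line; the form Docker recommends reads the secret from stdin instead. A sketch with placeholder variable names, not the job's actual credentials handling:

$ echo "$DOCKER_PASS" | docker login "$REGISTRY" --username "$DOCKER_USER" --password-stdin

Second, the ~60 MB curl download is the Docker Compose v2 CLI plugin being installed on the fly. A typical manual install looks like this (a sketch; the exact release URL pinned by the CSIT script may differ):

$ mkdir -p ~/.docker/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
$ chmod +x ~/.docker/cli-plugins/docker-compose
$ docker compose version   # verify 'compose' is now a docker subcommand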
Starting apex-pdp using postgres + Grafana/Prometheus
apex-pdp Pulling
kafka Pulling
pap Pulling
simulator Pulling
api Pulling
postgres Pulling
policy-db-migrator Pulling
prometheus Pulling
grafana Pulling
zookeeper Pulling
[per-layer "Pulling fs layer" / "Downloading" / "Verifying Checksum" / "Extracting" progress output omitted]
api Pulled
pap Pulled
policy-db-migrator Pulled
[remaining image pulls still in progress where the captured log cuts off]
140.4MB/257.9MB f3b09c502777 Downloading [=================================> ] 37.85MB/56.52MB eabd8714fec9 Downloading [===========================================> ] 323.3MB/375MB 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB f3b09c502777 Downloading [===============================================> ] 53.53MB/56.52MB f3b09c502777 Verifying Checksum f3b09c502777 Download complete 2d429b9e73a6 Pull complete eabd8714fec9 Downloading [=============================================> ] 338.5MB/375MB 55f2b468da67 Extracting [=============================> ] 149.8MB/257.9MB eabd8714fec9 Downloading [===============================================> ] 355.8MB/375MB 55f2b468da67 Extracting [==============================> ] 155.4MB/257.9MB eabd8714fec9 Downloading [=================================================> ] 368.2MB/375MB 55f2b468da67 Extracting [==============================> ] 159.9MB/257.9MB d223479d7367 Pull complete 18ce86a3284e Pull complete eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete 55f2b468da67 Extracting [================================> ] 165.4MB/257.9MB 9fa9226be034 Pull complete eca0188f477e Pull complete 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB c49e0ee60bfb Pull complete 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB ece604b40811 Extracting [==================================================>] 303B/303B ece604b40811 Extracting [==================================================>] 303B/303B 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB 55f2b468da67 Extracting [====================================> ] 188.8MB/257.9MB 55f2b468da67 Extracting [=====================================> ] 192.7MB/257.9MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 55f2b468da67 Extracting [========================================> ] 208.9MB/257.9MB 46eab5b44a35 Pull complete 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 55f2b468da67 Extracting [===========================================> ] 225.1MB/257.9MB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 55f2b468da67 Extracting 
[============================================> ] 229MB/257.9MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 098efa8b34b7 Extracting [==================================================>] 1.154kB/1.154kB 098efa8b34b7 Extracting [==================================================>] 1.154kB/1.154kB ece604b40811 Pull complete e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB c4d302cc468d Extracting [> ] 65.54kB/4.534MB 1617e25568b2 Extracting [========================================> ] 393.2kB/480.9kB 098efa8b34b7 Pull complete 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB c4d302cc468d Extracting [===> ] 327.7kB/4.534MB e444bcd4d577 Pull complete 55f2b468da67 Pull complete 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 384497dbce3b Extracting [> ] 557.1kB/63.48MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB c4d302cc468d Extracting [=======================================> ] 3.604MB/4.534MB 614e034e242f Pull complete c01e672f2391 Extracting [> ] 557.1kB/263.6MB simulator Pulled c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB eabd8714fec9 Extracting [> ] 557.1kB/375MB 82bfc142787e Extracting [=======> ] 1.376MB/8.613MB 1617e25568b2 Pull complete 384497dbce3b Extracting [> ] 1.114MB/63.48MB c4d302cc468d Pull complete eabd8714fec9 Extracting [=> ] 9.47MB/375MB c01e672f2391 Extracting [> ] 1.114MB/263.6MB 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 82bfc142787e Extracting [=======================================> ] 6.783MB/8.613MB 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [==> ] 17.27MB/375MB c01e672f2391 Extracting [=> ] 7.799MB/263.6MB 82bfc142787e Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 6ac0e4adf315 Extracting [==> ] 3.342MB/62.07MB 01e0882c90d9 Pull complete c01e672f2391 Extracting [===> ] 17.27MB/263.6MB eabd8714fec9 Extracting [==> ] 21.17MB/375MB 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 46baca71a4ef Pull complete c01e672f2391 Extracting [====> ] 26.18MB/263.6MB 6ac0e4adf315 Extracting [====> ] 
5.571MB/62.07MB 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB 384497dbce3b Extracting [==> ] 3.342MB/63.48MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB c01e672f2391 Extracting [=====> ] 31.2MB/263.6MB 531ee2cf3c0c Extracting [===================> ] 3.146MB/8.066MB 6ac0e4adf315 Extracting [======> ] 7.799MB/62.07MB eabd8714fec9 Extracting [====> ] 30.08MB/375MB b0e0ef7895f4 Extracting [=====> ] 3.932MB/37.01MB 531ee2cf3c0c Extracting [====================> ] 3.342MB/8.066MB c01e672f2391 Extracting [======> ] 32.31MB/263.6MB 6ac0e4adf315 Extracting [=========> ] 12.26MB/62.07MB eabd8714fec9 Extracting [======> ] 45.68MB/375MB 531ee2cf3c0c Extracting [===============================> ] 5.014MB/8.066MB 384497dbce3b Extracting [===> ] 4.456MB/63.48MB b0e0ef7895f4 Extracting [==============> ] 10.62MB/37.01MB c01e672f2391 Extracting [=======> ] 40.11MB/263.6MB eabd8714fec9 Extracting [=======> ] 52.92MB/375MB 6ac0e4adf315 Extracting [============> ] 15.04MB/62.07MB 531ee2cf3c0c Extracting [=======================================> ] 6.39MB/8.066MB b0e0ef7895f4 Extracting [=====================> ] 16.12MB/37.01MB c01e672f2391 Extracting [========> ] 44.01MB/263.6MB eabd8714fec9 Extracting [========> ] 60.16MB/375MB 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 6ac0e4adf315 Extracting [=============> ] 17.27MB/62.07MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB c01e672f2391 Extracting [=========> ] 50.69MB/263.6MB b0e0ef7895f4 Extracting [===================================> ] 25.95MB/37.01MB eabd8714fec9 Extracting [========> ] 65.73MB/375MB 6ac0e4adf315 Extracting [=================> ] 22.28MB/62.07MB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB c01e672f2391 Extracting [===========> ] 59.05MB/263.6MB b0e0ef7895f4 Extracting [===============================================> ] 35.39MB/37.01MB eabd8714fec9 Extracting [=========> ] 74.09MB/375MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [==========> ] 77.43MB/375MB c01e672f2391 Extracting [============> ] 63.5MB/263.6MB 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 384497dbce3b Extracting [=======> ] 8.913MB/63.48MB eabd8714fec9 Extracting [===========> ] 89.13MB/375MB c01e672f2391 Extracting [=============> ] 70.19MB/263.6MB 6ac0e4adf315 Extracting [=====================> ] 27.3MB/62.07MB 384497dbce3b Extracting [========> ] 10.58MB/63.48MB eabd8714fec9 Extracting [=============> ] 99.16MB/375MB c01e672f2391 Extracting [==============> ] 77.43MB/263.6MB 6ac0e4adf315 Extracting [=========================> ] 31.2MB/62.07MB 384497dbce3b Extracting [==========> ] 12.81MB/63.48MB c01e672f2391 Extracting [================> ] 86.34MB/263.6MB eabd8714fec9 Extracting [==============> ] 107.5MB/375MB 6ac0e4adf315 Extracting [===============================> ] 39.55MB/62.07MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB c01e672f2391 Extracting [==================> ] 96.37MB/263.6MB eabd8714fec9 Extracting [===============> ] 112.5MB/375MB 6ac0e4adf315 Extracting [==========================================> ] 52.92MB/62.07MB 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB c01e672f2391 Extracting [====================> ] 107MB/263.6MB eabd8714fec9 Extracting [===============> ] 117.5MB/375MB 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB 384497dbce3b Extracting [================> ] 21.17MB/63.48MB 
6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB c01e672f2391 Extracting [=====================> ] 114.2MB/263.6MB eabd8714fec9 Extracting [================> ] 120.3MB/375MB 384497dbce3b Extracting [=================> ] 22.28MB/63.48MB c01e672f2391 Extracting [======================> ] 118.7MB/263.6MB eabd8714fec9 Extracting [================> ] 124.8MB/375MB 384497dbce3b Extracting [===================> ] 25.07MB/63.48MB c01e672f2391 Extracting [========================> ] 129.8MB/263.6MB eabd8714fec9 Extracting [=================> ] 130.4MB/375MB c01e672f2391 Extracting [===========================> ] 143.2MB/263.6MB eabd8714fec9 Extracting [=================> ] 134.3MB/375MB 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB c01e672f2391 Extracting [=============================> ] 154.9MB/263.6MB eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 384497dbce3b Extracting [========================> ] 31.2MB/63.48MB c01e672f2391 Extracting [===============================> ] 166.6MB/263.6MB eabd8714fec9 Extracting [===================> ] 143.2MB/375MB 384497dbce3b Extracting [==========================> ] 33.42MB/63.48MB c01e672f2391 Extracting [=================================> ] 178.3MB/263.6MB eabd8714fec9 Extracting [===================> ] 145.9MB/375MB c01e672f2391 Extracting [==================================> ] 183.8MB/263.6MB 384497dbce3b Extracting [============================> ] 35.65MB/63.48MB eabd8714fec9 Extracting [===================> ] 147.1MB/375MB c01e672f2391 Extracting [====================================> ] 193.9MB/263.6MB eabd8714fec9 Extracting [====================> ] 150.4MB/375MB 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB c01e672f2391 Extracting [======================================> ] 203.9MB/263.6MB eabd8714fec9 Extracting [====================> ] 155.4MB/375MB 384497dbce3b Extracting [================================> ] 41.78MB/63.48MB c01e672f2391 Extracting [========================================> ] 213.4MB/263.6MB eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB 384497dbce3b Extracting [===================================> ] 44.56MB/63.48MB c01e672f2391 Extracting [==========================================> ] 226.2MB/263.6MB eabd8714fec9 Extracting [======================> ] 165.4MB/375MB 531ee2cf3c0c Pull complete 384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB c01e672f2391 Extracting [============================================> ] 232.8MB/263.6MB eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 384497dbce3b Extracting [=======================================> ] 50.14MB/63.48MB eabd8714fec9 Extracting [=======================> ] 175.5MB/375MB eabd8714fec9 Extracting [=========================> ] 191.6MB/375MB c01e672f2391 Extracting [==============================================> ] 242.9MB/263.6MB 384497dbce3b Extracting [=======================================> ] 50.69MB/63.48MB eabd8714fec9 Extracting [==========================> ] 202.2MB/375MB c01e672f2391 Extracting [===============================================> ] 249MB/263.6MB 384497dbce3b Extracting [=========================================> ] 52.92MB/63.48MB eabd8714fec9 Extracting [============================> ] 210.6MB/375MB c01e672f2391 Extracting [=================================================> ] 261.8MB/263.6MB b0e0ef7895f4 Pull complete c01e672f2391 Extracting 
[==================================================>] 263.6MB/263.6MB 384497dbce3b Extracting [=============================================> ] 57.93MB/63.48MB ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 384497dbce3b Extracting [==============================================> ] 58.49MB/63.48MB 6ac0e4adf315 Pull complete eabd8714fec9 Extracting [============================> ] 217.3MB/375MB ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [=============================> ] 222.8MB/375MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB eabd8714fec9 Extracting [===============================> ] 232.8MB/375MB eabd8714fec9 Extracting [===============================> ] 239MB/375MB eabd8714fec9 Extracting [================================> ] 245.1MB/375MB eabd8714fec9 Extracting [=================================> ] 250.1MB/375MB eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB eabd8714fec9 Extracting [==================================> ] 259.6MB/375MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB eabd8714fec9 Extracting [===================================> ] 265.2MB/375MB f3b09c502777 Extracting [====> ] 5.571MB/56.52MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB f3b09c502777 Extracting [========> ] 10.03MB/56.52MB eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB f3b09c502777 Extracting [===========> ] 13.37MB/56.52MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB f3b09c502777 Extracting [===============> ] 17.83MB/56.52MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB ed54a7dee1d8 Pull complete c01e672f2391 Pull complete eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB f3b09c502777 Extracting [======================> ] 25.62MB/56.52MB f3b09c502777 Extracting [============================> ] 31.75MB/56.52MB eabd8714fec9 Extracting [=====================================> ] 282.4MB/375MB f3b09c502777 Extracting [=========================================> ] 46.79MB/56.52MB eabd8714fec9 Extracting [======================================> ] 286.9MB/375MB f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 293MB/375MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB eabd8714fec9 
Extracting [=======================================> ] 299.7MB/375MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB 384497dbce3b Pull complete c0c90eeb8aca Pull complete eabd8714fec9 Extracting [========================================> ] 303MB/375MB eabd8714fec9 Extracting [========================================> ] 305.3MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB eabd8714fec9 Extracting [==========================================> ] 318.1MB/375MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB f3b09c502777 Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B apex-pdp Pulled 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B 12c5c803443f Pull complete e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 055b9255fa03 Pull complete 5cfb27c10ea5 Pull complete b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 408012a7b118 Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB e27c75a98748 Pull complete eabd8714fec9 Extracting [=============================================> ] 342MB/375MB b176d7edde70 Pull complete 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B grafana Pulled 44986281b8b9 Pull 
complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB e040ea11fa10 Pull complete eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB e73cb4a42719 Extracting [===> ] 7.242MB/109.1MB bf70c5107ab5 Pull complete 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB e73cb4a42719 Extracting [====> ] 8.913MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 09d5a3f70313 Extracting [====> ] 9.47MB/109.2MB 1ccde423731d Pull complete e73cb4a42719 Extracting [=====> ] 12.81MB/109.1MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 09d5a3f70313 Extracting [========> ] 18.38MB/109.2MB e73cb4a42719 Extracting [=======> ] 15.6MB/109.1MB eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 09d5a3f70313 Extracting [===========> ] 26.18MB/109.2MB e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 09d5a3f70313 Extracting [==================> ] 39.55MB/109.2MB 7df673c7455d Pull complete e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB eabd8714fec9 Extracting [===============================================> ] 355.4MB/375MB prometheus Pulled 09d5a3f70313 Extracting [======================> ] 48.46MB/109.2MB e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 09d5a3f70313 Extracting [===========================> ] 60.72MB/109.2MB e73cb4a42719 Extracting [==============> ] 31.75MB/109.1MB eabd8714fec9 Extracting [================================================> ] 361MB/375MB 09d5a3f70313 Extracting [================================> ] 71.86MB/109.2MB e73cb4a42719 Extracting [=================> ] 37.88MB/109.1MB eabd8714fec9 Extracting [================================================> ] 366.5MB/375MB 09d5a3f70313 Extracting [=======================================> ] 86.9MB/109.2MB e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB 09d5a3f70313 Extracting [============================================> ] 97.48MB/109.2MB eabd8714fec9 Extracting [=================================================> ] 371.6MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB e73cb4a42719 Extracting [========================> ] 52.92MB/109.1MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 356f5c2c843b 
Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 356f5c2c843b Pull complete e73cb4a42719 Extracting [==========================> ] 57.38MB/109.1MB kafka Pulled 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 8f10199ed94b Extracting [======================> ] 4.03MB/8.768MB e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB e73cb4a42719 Extracting [====================================> ] 79.1MB/109.1MB f963a77d2726 Pull complete e73cb4a42719 Extracting [=======================================> ] 86.9MB/109.1MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB f3a82e9f1761 Extracting [===============> ] 13.76MB/44.41MB e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB f3a82e9f1761 Extracting [===========================> ] 24.77MB/44.41MB f3a82e9f1761 Extracting [=============================================> ] 40.37MB/44.41MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB e73cb4a42719 Pull complete 2e8a7df9c2ee Pull complete a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting 
[==================================================>] 98B/98B a83b68436f09 Pull complete 10f05dd8b1db Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B da3ed5db7103 Extracting [====> ] 10.58MB/127.4MB da3ed5db7103 Extracting [========> ] 22.84MB/127.4MB 7e568a0dc8fb Pull complete postgres Pulled da3ed5db7103 Extracting [==============> ] 36.21MB/127.4MB da3ed5db7103 Extracting [====================> ] 51.81MB/127.4MB da3ed5db7103 Extracting [==========================> ] 66.29MB/127.4MB da3ed5db7103 Extracting [================================> ] 82.44MB/127.4MB da3ed5db7103 Extracting [=====================================> ] 96.37MB/127.4MB da3ed5db7103 Extracting [===========================================> ] 111.4MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 120.3MB/127.4MB da3ed5db7103 Extracting [================================================> ] 124.8MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container simulator Creating Container prometheus Creating Container postgres Creating Container zookeeper Creating Container postgres Created Container policy-db-migrator Creating Container simulator Created Container prometheus Created Container grafana Creating Container zookeeper Created Container kafka Creating Container policy-db-migrator Created Container kafka Created Container policy-api Creating Container grafana Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container zookeeper Starting Container postgres Starting Container simulator Starting Container prometheus Starting Container postgres Started Container policy-db-migrator Starting Container simulator Started Container zookeeper Started Container kafka Starting Container policy-db-migrator Started Container policy-api Starting Container kafka Started Container prometheus Started Container grafana Starting Container policy-api Started 
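The container status tables that follow are docker ps output. For reference, an equivalent check is sketched below with the Docker SDK for Python (docker==7.1.0 is in the job's venv); this is an illustrative sketch, not part of the CSIT scripts:

    # Sketch: list running compose containers and their status via the Docker SDK.
    # Assumes a local Docker daemon and the docker Python package.
    import docker

    client = docker.from_env()
    for c in client.containers.list():
        # Fall back to the image ID when the image has no tag.
        image = c.image.tags[0] if c.image.tags else c.image.short_id
        print(f"{image}  {c.name}  {c.status}")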
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for policy-pap to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                               NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
Checking if REST port 30001 is open on localhost ...
[container status table repeated: same nine containers, all "Up About a minute"]
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/models'...
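The two "Checking if REST port ... is open" steps poll until the PAP REST endpoint accepts TCP connections. A minimal sketch of such a probe (host, ports, and timeouts are illustrative, not taken from the CSIT scripts):

    # Sketch: block until a TCP port accepts connections or a deadline passes.
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    return True  # something is listening on the port
            except OSError:
                time.sleep(2.0)  # not open yet; retry until the deadline
        return False

    for port in (30003, 30001):  # the two ports checked above
        print(port, "open" if wait_for_port("localhost", port) else "closed")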
Building robot framework docker image
sha256:51bab04da2b02f28adf34f270c94745666211e8b9c959b1d727f57d08e69510c
top - 23:14:50 up 4 min, 0 users, load average: 2.35, 1.92, 0.85
Tasks: 232 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.4 us, 3.5 sy, 0.0 ni, 77.8 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         20G         28M        8.1G         28G
Swap:          1.0G          0B        1.0G
[container status table repeated: same nine containers, all "Up 2 minutes"]
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
149294c4c873   policy-apex-pdp   0.70%   221.6MiB / 31.41GiB   0.69%   49.8kB / 63.8kB   0B / 0B         52
705c561de1b8   policy-pap        1.18%   534.3MiB / 31.41GiB   1.66%   132kB / 220kB     0B / 139MB      68
0b9208b93370   policy-api        0.10%   432.7MiB / 31.41GiB   1.35%   1.15MB / 1.02MB   0B / 0B         57
c35f2d5ddf22   kafka             1.51%   391.9MiB / 31.41GiB   1.22%   206kB / 185kB     0B / 590kB      83
c1e40e4c0129   grafana           0.14%   109.1MiB / 31.41GiB   0.34%   19.1MB / 194kB    0B / 31.5MB     21
fed4460352c9   zookeeper         0.08%   84.66MiB / 31.41GiB   0.26%   53.3kB / 47.4kB   0B / 401kB      62
c456f201b6c5   prometheus        0.01%   21.16MiB / 31.41GiB   0.07%   132kB / 5.44kB    98.3kB / 0B     13
9a6edbfd3a53   postgres          0.00%   85.12MiB / 31.41GiB   0.26%   1.67MB / 1.73MB   4.1kB / 158MB   26
69aaa4c20162   simulator         0.07%   123.9MiB / 31.41GiB   0.39%   1.68kB / 0B       127kB / 0B      64
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
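The ROBOT_VARIABLES listed above are ordinary --variable options to the robot runner. An equivalent invocation through Robot Framework's Python API might look like this sketch (variable list abbreviated; output directory as reported in the results below):

    # Sketch: run the two CSIT suites with a subset of the variables above.
    # robot.run mirrors the CLI; the 'variable' option maps to --variable.
    from robot import run

    rc = run(
        "pap-test.robot",
        "pap-slas.robot",
        variable=[
            "POLICY_PAP_IP:policy-pap:6969",
            "POLICY_API_IP:policy-api:6969",
            "PROMETHEUS_IP:prometheus:9090",
        ],
        outputdir="/tmp/results",
    )
    print("RESULT:", rc)  # 0 means all tests passed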
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
[container status table repeated: same nine containers, all "Up 3 minutes"]
Shut down started!
Collecting logs from docker compose containers...
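The pass counts and RESULT: 0 above are derived from the statistics in /tmp/results/output.xml. A small sketch of reading them back with Robot Framework's result API (attribute names as in RF 4+):

    # Sketch: load the Robot Framework output file and print total statistics.
    from robot.api import ExecutionResult

    result = ExecutionResult("/tmp/results/output.xml")
    total = result.statistics.total  # overall pass/fail counters
    print(f"{total.total} tests, {total.passed} passed, {total.failed} failed")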
logger=settings t=2025-06-13T23:12:49.539139494Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2025-06-13T23:12:49.539143674Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-13T23:12:49.539148484Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-13T23:12:49.539181426Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-13T23:12:49.539190216Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-13T23:12:49.539665019Z level=info msg=FeatureToggles formatString=true dashboardSceneForViewers=true panelMonitoring=true externalCorePlugins=true dashboardScene=true preinstallAutoUpdate=true alertingUIOptimizeReducer=true promQLScope=true influxdbBackendMigration=true alertingQueryAndExpressionsStepMode=true alertingNotificationsStepMode=true dataplaneFrontendFallback=true cloudWatchCrossAccountQuerying=true pluginsDetailsRightPanel=true reportingUseRawTimeRange=true failWrongDSUID=true correlations=true awsAsyncQueryCaching=true unifiedStorageSearchPermissionFiltering=true dashboardSceneSolo=true alertingSimplifiedRouting=true annotationPermissionUpdate=true lokiQuerySplitting=true kubernetesPlaylists=true useSessionStorageForRedirection=true prometheusUsesCombobox=true logsContextDatasourceUi=true logsExploreTableVisualisation=true dashgpt=true alertRuleRestore=true logsInfiniteScrolling=true transformationsRedesign=true lokiQueryHints=true onPremToCloudMigrations=true ssoSettingsSAML=true alertingApiServer=true prometheusAzureOverrideAudience=true alertingRuleRecoverDeleted=true newDashboardSharingComponent=true logsPanelControls=true cloudWatchRoundUpEndTime=true nestedFolders=true recoveryThreshold=true kubernetesClientDashboardsFolders=true newFiltersUI=true grafanaconThemes=true tlsMemcached=true addFieldFromCalculationStatFunctions=true cloudWatchNewLabelParsing=true alertingInsights=true logRowsPopoverMenu=true publicDashboardsScene=true alertingRuleVersionHistoryRestore=true groupToNestedTableTransformation=true lokiLabelNamesQueryApi=true recordedQueriesMulti=true unifiedRequestLog=true angularDeprecationUI=true newPDFRendering=true pinNavItems=true alertingRulePermanentlyDelete=true azureMonitorPrometheusExemplars=true lokiStructuredMetadata=true ssoSettingsApi=true azureMonitorEnableUserAuth=true grafana | logger=sqlstore t=2025-06-13T23:12:49.539731312Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-13T23:12:49.539770384Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-13T23:12:49.541725128Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-13T23:12:49.541741199Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-13T23:12:49.542474374Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-13T23:12:49.543535645Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.053671ms grafana | logger=migrator t=2025-06-13T23:12:49.547535458Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-13T23:12:49.548128236Z level=info msg="Migration successfully executed" id="create user table" duration=591.878µs grafana | logger=migrator t=2025-06-13T23:12:49.553943307Z level=info msg="Executing migration" id="add unique index user.login" 
grafana | logger=migrator t=2025-06-13T23:12:49.558278165Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-13T23:12:49.559059483Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=778.358µs
grafana | logger=migrator t=2025-06-13T23:12:49.562562932Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.563395002Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=831.75µs
grafana | logger=migrator t=2025-06-13T23:12:49.568688327Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.569510776Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=821.819µs
grafana | logger=migrator t=2025-06-13T23:12:49.574561109Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.578387774Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.827355ms
grafana | logger=migrator t=2025-06-13T23:12:49.583792314Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-13T23:12:49.584791052Z level=info msg="Migration successfully executed" id="create user table v2" duration=995.458µs
grafana | logger=migrator t=2025-06-13T23:12:49.591520186Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.592339086Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=817.219µs
grafana | logger=migrator t=2025-06-13T23:12:49.597687803Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.59846002Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=772.177µs
grafana | logger=migrator t=2025-06-13T23:12:49.603532145Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-13T23:12:49.604179216Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=649.912µs
grafana | logger=migrator t=2025-06-13T23:12:49.609510343Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-13T23:12:49.610534502Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.02361ms
grafana | logger=migrator t=2025-06-13T23:12:49.614113104Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-13T23:12:49.616078319Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.964475ms
grafana | logger=migrator t=2025-06-13T23:12:49.620634788Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-13T23:12:49.620757414Z level=info msg="Migration successfully executed" id="Update user table charset" duration=79.274µs
grafana | logger=migrator t=2025-06-13T23:12:49.626057689Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-13T23:12:49.627949771Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.890021ms
grafana | logger=migrator t=2025-06-13T23:12:49.631550884Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-13T23:12:49.632128872Z level=info msg="Migration successfully executed" id="Add missing user data" duration=577.428µs
grafana | logger=migrator t=2025-06-13T23:12:49.636546725Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-13T23:12:49.637750182Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.198628ms
grafana | logger=migrator t=2025-06-13T23:12:49.642462289Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-13T23:12:49.643281629Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=818.69µs
grafana | logger=migrator t=2025-06-13T23:12:49.647984895Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-13T23:12:49.649280148Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.296183ms
grafana | logger=migrator t=2025-06-13T23:12:49.652397108Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-13T23:12:49.663101943Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.704715ms
grafana | logger=migrator t=2025-06-13T23:12:49.667765968Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-13T23:12:49.668656621Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=890.243µs
grafana | logger=migrator t=2025-06-13T23:12:49.673383999Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-13T23:12:49.673702774Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=318.056µs
grafana | logger=migrator t=2025-06-13T23:12:49.678483674Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-13T23:12:49.679299353Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=815.149µs
grafana | logger=migrator t=2025-06-13T23:12:49.682501368Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-13T23:12:49.683742527Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.239369ms
grafana | logger=migrator t=2025-06-13T23:12:49.687041226Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-13T23:12:49.687457146Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=415.43µs
grafana | logger=migrator t=2025-06-13T23:12:49.691881569Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-13T23:12:49.693900157Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=2.017817ms
grafana | logger=migrator t=2025-06-13T23:12:49.700687873Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-13T23:12:49.701523484Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=844.701µs
grafana | logger=migrator t=2025-06-13T23:12:49.704699457Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-13T23:12:49.705036783Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=337.756µs
grafana | logger=migrator t=2025-06-13T23:12:49.708039548Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-13T23:12:49.708778963Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=739.026µs
grafana | logger=migrator t=2025-06-13T23:12:49.713333212Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-13T23:12:49.71390293Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=569.238µs
grafana | logger=migrator t=2025-06-13T23:12:49.716781148Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-13T23:12:49.717339865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=558.437µs
grafana | logger=migrator t=2025-06-13T23:12:49.720107429Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-13T23:12:49.720729859Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=621.8µs
grafana | logger=migrator t=2025-06-13T23:12:49.726112888Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-13T23:12:49.726706246Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=593.338µs
grafana | logger=migrator t=2025-06-13T23:12:49.732548078Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-13T23:12:49.732606521Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=61.343µs
grafana | logger=migrator t=2025-06-13T23:12:49.735987703Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.737505577Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.516463ms
grafana | logger=migrator t=2025-06-13T23:12:49.740912841Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.742239785Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.325594ms
grafana | logger=migrator t=2025-06-13T23:12:49.749078684Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.749812849Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=729.165µs
grafana | logger=migrator t=2025-06-13T23:12:49.753388791Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.754521336Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.132075ms
grafana | logger=migrator t=2025-06-13T23:12:49.758107999Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.762111522Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.003542ms
grafana | logger=migrator t=2025-06-13T23:12:49.768587973Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-13T23:12:49.77039565Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.806897ms
grafana | logger=migrator t=2025-06-13T23:12:49.774734699Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.775630533Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=896.783µs
grafana | logger=migrator t=2025-06-13T23:12:49.779033516Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.779743431Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=710.545µs
grafana | logger=migrator t=2025-06-13T23:12:49.782682492Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.783372495Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=689.833µs
grafana | logger=migrator t=2025-06-13T23:12:49.786206502Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.786858353Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=648.371µs
grafana | logger=migrator t=2025-06-13T23:12:49.791215343Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-13T23:12:49.791638343Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=419.88µs
grafana | logger=migrator t=2025-06-13T23:12:49.795404475Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T23:12:49.796035575Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=580.248µs
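The temp_user sequence just above (rename to temp_user_tmp_qwerty, create the v2 shape, recreate the indexes, copy v1 rows into v2, drop the temporary table) is the standard way to reshape a table on SQLite, whose ALTER TABLE cannot change or drop columns. A sketch of the same sequence as ordered statements; only the statement order is taken from the log, the column list is illustrative:

package migrations

// tempUserV2 lists the statements in the order the migrator logged them.
var tempUserV2 = []string{
	`ALTER TABLE temp_user RENAME TO temp_user_tmp_qwerty`,
	`CREATE TABLE temp_user (
	    id INTEGER PRIMARY KEY AUTOINCREMENT,
	    org_id INTEGER NOT NULL,
	    email TEXT NOT NULL,
	    code TEXT NOT NULL,
	    status TEXT NOT NULL
	)`,
	`CREATE INDEX IDX_temp_user_email ON temp_user (email)`,
	`CREATE INDEX IDX_temp_user_org_id ON temp_user (org_id)`,
	`CREATE INDEX IDX_temp_user_code ON temp_user (code)`,
	`CREATE INDEX IDX_temp_user_status ON temp_user (status)`,
	`INSERT INTO temp_user (id, org_id, email, code, status)
	    SELECT id, org_id, email, code, status FROM temp_user_tmp_qwerty`,
	`DROP TABLE temp_user_tmp_qwerty`,
}

The same rename/create/copy/drop shape recurs below for dashboard, dashboard_provisioning, data_source, api_key and alert_rule_tag.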
grafana | logger=migrator t=2025-06-13T23:12:49.799731193Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-13T23:12:49.800127542Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=396.149µs
grafana | logger=migrator t=2025-06-13T23:12:49.804871381Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-13T23:12:49.805420577Z level=info msg="Migration successfully executed" id="create star table" duration=548.416µs
grafana | logger=migrator t=2025-06-13T23:12:49.808216962Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-13T23:12:49.808919736Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=702.184µs
grafana | logger=migrator t=2025-06-13T23:12:49.811845527Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-13T23:12:49.813014023Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.167336ms
grafana | logger=migrator t=2025-06-13T23:12:49.816166595Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-13T23:12:49.817295199Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.127604ms
grafana | logger=migrator t=2025-06-13T23:12:49.823229665Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-13T23:12:49.824443123Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.212498ms
grafana | logger=migrator t=2025-06-13T23:12:49.829222263Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-13T23:12:49.829822972Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=600.299µs
grafana | logger=migrator t=2025-06-13T23:12:49.832861359Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-13T23:12:49.83350625Z level=info msg="Migration successfully executed" id="create org table v1" duration=644.161µs
grafana | logger=migrator t=2025-06-13T23:12:49.83641825Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.837109123Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=691.063µs
grafana | logger=migrator t=2025-06-13T23:12:49.84182081Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-13T23:12:49.84244521Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=623.59µs
grafana | logger=migrator t=2025-06-13T23:12:49.845312098Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.845953009Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=640.021µs
grafana | logger=migrator t=2025-06-13T23:12:49.84991241Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.850617354Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=704.054µs
grafana | logger=migrator t=2025-06-13T23:12:49.853596777Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.854293191Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=695.534µs
grafana | logger=migrator t=2025-06-13T23:12:49.859211768Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-13T23:12:49.859233289Z level=info msg="Migration successfully executed" id="Update org table charset" duration=22.471µs
grafana | logger=migrator t=2025-06-13T23:12:49.862004942Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-13T23:12:49.862031914Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.151µs
grafana | logger=migrator t=2025-06-13T23:12:49.864318854Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-13T23:12:49.864498262Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=178.128µs
grafana | logger=migrator t=2025-06-13T23:12:49.867300907Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-13T23:12:49.867878285Z level=info msg="Migration successfully executed" id="create dashboard table" duration=576.928µs
grafana | logger=migrator t=2025-06-13T23:12:49.872499438Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-13T23:12:49.873127508Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=627.24µs
grafana | logger=migrator t=2025-06-13T23:12:49.877546741Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-13T23:12:49.878191852Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=646.722µs
grafana | logger=migrator t=2025-06-13T23:12:49.881615477Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-13T23:12:49.8821085Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=492.333µs
grafana | logger=migrator t=2025-06-13T23:12:49.887137863Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-13T23:12:49.88791998Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=781.008µs
grafana | logger=migrator t=2025-06-13T23:12:49.890974077Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.891499423Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=525.766µs
grafana | logger=migrator t=2025-06-13T23:12:49.89435405Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-13T23:12:49.898037387Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=3.684107ms
grafana | logger=migrator t=2025-06-13T23:12:49.903833187Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-13T23:12:49.904378243Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=544.556µs
grafana | logger=migrator t=2025-06-13T23:12:49.907088753Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.90763961Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=551.217µs
grafana | logger=migrator t=2025-06-13T23:12:49.910677666Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-13T23:12:49.911356819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=678.243µs
grafana | logger=migrator t=2025-06-13T23:12:49.917182109Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-13T23:12:49.917516186Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=332.956µs
grafana | logger=migrator t=2025-06-13T23:12:49.920539731Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-13T23:12:49.921208983Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=668.512µs
grafana | logger=migrator t=2025-06-13T23:12:49.924205838Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T23:12:49.924218558Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=13.06µs
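Note the durations here: "alter dashboard.data to mediumtext v1" completes in 13.06µs and the charset updates in tens of microseconds, against hundreds of microseconds for real DDL. MEDIUMTEXT and per-table charsets are MySQL concepts, so on the sqlite3 store used in this run such steps plausibly reduce to dialect-gated no-ops. A sketch of that gating; the Dialect type and statement text are illustrative, not Grafana's actual code:

package migrations

type Dialect string

const (
	SQLite   Dialect = "sqlite3"
	MySQL    Dialect = "mysql"
	Postgres Dialect = "postgres"
)

// alterDashboardDataSQL returns the statement for the current dialect, or ""
// when there is nothing to do (SQLite's TEXT already has no length limit).
func alterDashboardDataSQL(d Dialect) string {
	if d == MySQL {
		return "ALTER TABLE dashboard MODIFY data MEDIUMTEXT"
	}
	return "" // no-op; logged with a microsecond duration, as above
}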
id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:49.917516186Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=332.956µs grafana | logger=migrator t=2025-06-13T23:12:49.920539731Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-13T23:12:49.921208983Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=668.512µs grafana | logger=migrator t=2025-06-13T23:12:49.924205838Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-13T23:12:49.924218558Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=13.06µs grafana | logger=migrator t=2025-06-13T23:12:49.929616098Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T23:12:49.931005805Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.388917ms grafana | logger=migrator t=2025-06-13T23:12:49.934956085Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T23:12:49.937169112Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.212107ms grafana | logger=migrator t=2025-06-13T23:12:49.940076312Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-13T23:12:49.941464939Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.387647ms grafana | logger=migrator t=2025-06-13T23:12:49.94606041Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-13T23:12:49.946702261Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=640.951µs grafana | logger=migrator t=2025-06-13T23:12:49.950465802Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T23:12:49.95187923Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.412878ms grafana | logger=migrator t=2025-06-13T23:12:49.954699896Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T23:12:49.955281094Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=577.878µs grafana | logger=migrator t=2025-06-13T23:12:49.960723676Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T23:12:49.961294724Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=570.328µs grafana | logger=migrator t=2025-06-13T23:12:49.964176563Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-13T23:12:49.96433526Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=158.057µs grafana | logger=migrator t=2025-06-13T23:12:49.967893652Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-13T23:12:49.967914343Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=20.521µs grafana | logger=migrator 
grafana | logger=migrator t=2025-06-13T23:12:49.970846844Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-13T23:12:49.972329145Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.482261ms
grafana | logger=migrator t=2025-06-13T23:12:49.976645103Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-13T23:12:49.978141025Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.494372ms
grafana | logger=migrator t=2025-06-13T23:12:49.981250045Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-13T23:12:49.982737386Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.485641ms
grafana | logger=migrator t=2025-06-13T23:12:49.985641976Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-13T23:12:49.987139698Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.496242ms
grafana | logger=migrator t=2025-06-13T23:12:49.992638533Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-13T23:12:49.993334017Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=698.204µs
grafana | logger=migrator t=2025-06-13T23:12:49.997226844Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-13T23:12:49.998617831Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.396007ms
grafana | logger=migrator t=2025-06-13T23:12:50.003938511Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-13T23:12:50.004851725Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=913.074µs
grafana | logger=migrator t=2025-06-13T23:12:50.009555784Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-13T23:12:50.009623917Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=72.293µs
grafana | logger=migrator t=2025-06-13T23:12:50.014463373Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-13T23:12:50.016007188Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.542555ms
grafana | logger=migrator t=2025-06-13T23:12:50.021679174Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-13T23:12:50.022543126Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=863.522µs
grafana | logger=migrator t=2025-06-13T23:12:50.025769673Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.030547475Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.779382ms
grafana | logger=migrator t=2025-06-13T23:12:50.035421882Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-13T23:12:50.03598684Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=564.377µs
grafana | logger=migrator t=2025-06-13T23:12:50.039026067Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.04010788Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.079913ms
grafana | logger=migrator t=2025-06-13T23:12:50.044339156Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.045713833Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.374317ms
grafana | logger=migrator t=2025-06-13T23:12:50.051450992Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-13T23:12:50.051786908Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=335.766µs
grafana | logger=migrator t=2025-06-13T23:12:50.054817555Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T23:12:50.055391103Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=568.408µs
grafana | logger=migrator t=2025-06-13T23:12:50.058589269Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-13T23:12:50.061956863Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.366643ms
grafana | logger=migrator t=2025-06-13T23:12:50.068449548Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-13T23:12:50.069282939Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=832.641µs
grafana | logger=migrator t=2025-06-13T23:12:50.07219108Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-13T23:12:50.07238018Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=189.909µs
grafana | logger=migrator t=2025-06-13T23:12:50.075534603Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-13T23:12:50.075842648Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=312.535µs
grafana | logger=migrator t=2025-06-13T23:12:50.079118177Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-13T23:12:50.080382229Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.263432ms
grafana | logger=migrator t=2025-06-13T23:12:50.090856468Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-13T23:12:50.093830643Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.976085ms
grafana | logger=migrator t=2025-06-13T23:12:50.097040149Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-13T23:12:50.099588543Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.550134ms
grafana | logger=migrator t=2025-06-13T23:12:50.102632371Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-13T23:12:50.103311644Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=678.913µs
grafana | logger=migrator t=2025-06-13T23:12:50.106207015Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-13T23:12:50.107943109Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=1.736054ms
grafana | logger=migrator t=2025-06-13T23:12:50.112160774Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-13T23:12:50.114693778Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.532283ms
grafana | logger=migrator t=2025-06-13T23:12:50.11885873Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-13T23:12:50.119392696Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=533.056µs
grafana | logger=migrator t=2025-06-13T23:12:50.122230724Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-13T23:12:50.124881863Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.647319ms
grafana | logger=migrator t=2025-06-13T23:12:50.129638874Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-13T23:12:50.13058392Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=945.156µs
grafana | logger=migrator t=2025-06-13T23:12:50.133616638Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-13T23:12:50.134143923Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=526.185µs
grafana | logger=migrator t=2025-06-13T23:12:50.137278216Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-13T23:12:50.138259814Z level=info msg="Migration successfully executed" id="create data_source table" duration=977.757µs
grafana | logger=migrator t=2025-06-13T23:12:50.143679087Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-13T23:12:50.145030133Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.348586ms
grafana | logger=migrator t=2025-06-13T23:12:50.14888159Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-13T23:12:50.15031236Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.43046ms
grafana | logger=migrator t=2025-06-13T23:12:50.153522456Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.154308654Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=785.628µs
grafana | logger=migrator t=2025-06-13T23:12:50.157440907Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.158201584Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=759.827µs
grafana | logger=migrator t=2025-06-13T23:12:50.162747835Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.17212049Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.373066ms
grafana | logger=migrator t=2025-06-13T23:12:50.175764968Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-13T23:12:50.176695983Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=930.345µs
grafana | logger=migrator t=2025-06-13T23:12:50.181246414Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.182077245Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=830.701µs
grafana | logger=migrator t=2025-06-13T23:12:50.185283861Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.186151593Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=867.782µs
grafana | logger=migrator t=2025-06-13T23:12:50.18918061Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-13T23:12:50.189761688Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=580.378µs
grafana | logger=migrator t=2025-06-13T23:12:50.195092258Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-13T23:12:50.199014908Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.91687ms
grafana | logger=migrator t=2025-06-13T23:12:50.2025349Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-13T23:12:50.205980557Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.447968ms
grafana | logger=migrator t=2025-06-13T23:12:50.212525216Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-13T23:12:50.212551117Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.012µs
grafana | logger=migrator t=2025-06-13T23:12:50.218525977Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-13T23:12:50.21878731Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=260.313µs
grafana | logger=migrator t=2025-06-13T23:12:50.221134424Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-13T23:12:50.223595554Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.46314ms
grafana | logger=migrator t=2025-06-13T23:12:50.226550918Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-13T23:12:50.226772118Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=220.64µs
msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-13T23:12:50.229348474Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=204.43µs grafana | logger=migrator t=2025-06-13T23:12:50.233554788Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-13T23:12:50.23749211Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.936282ms grafana | logger=migrator t=2025-06-13T23:12:50.241100815Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-13T23:12:50.241502215Z level=info msg="Migration successfully executed" id="Update uid value" duration=400.2µs grafana | logger=migrator t=2025-06-13T23:12:50.24593575Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-13T23:12:50.246915868Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=979.858µs grafana | logger=migrator t=2025-06-13T23:12:50.251922582Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-13T23:12:50.253408904Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.485553ms grafana | logger=migrator t=2025-06-13T23:12:50.257169247Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-13T23:12:50.261878796Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.709839ms grafana | logger=migrator t=2025-06-13T23:12:50.268226825Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-13T23:12:50.270706815Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.479711ms grafana | logger=migrator t=2025-06-13T23:12:50.277409621Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-13T23:12:50.277446043Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=40.392µs grafana | logger=migrator t=2025-06-13T23:12:50.282996883Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-13T23:12:50.284345249Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.352325ms grafana | logger=migrator t=2025-06-13T23:12:50.288306921Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-13T23:12:50.289607704Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.300083ms grafana | logger=migrator t=2025-06-13T23:12:50.293815999Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-13T23:12:50.294558505Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=742.226µs grafana | logger=migrator t=2025-06-13T23:12:50.297357721Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-13T23:12:50.29814196Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=783.878µs grafana | logger=migrator t=2025-06-13T23:12:50.304376333Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 
grafana | logger=migrator t=2025-06-13T23:12:50.308156967Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.308959456Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=801.508µs
grafana | logger=migrator t=2025-06-13T23:12:50.315940875Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.316730974Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=790.068µs
grafana | logger=migrator t=2025-06-13T23:12:50.31995002Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-13T23:12:50.326894568Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.943108ms
grafana | logger=migrator t=2025-06-13T23:12:50.330350356Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-13T23:12:50.33105807Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=706.434µs
grafana | logger=migrator t=2025-06-13T23:12:50.336653253Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.337870372Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.21674ms
grafana | logger=migrator t=2025-06-13T23:12:50.341369802Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.342581441Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.210169ms
grafana | logger=migrator t=2025-06-13T23:12:50.346292601Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-13T23:12:50.347646457Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.353276ms
grafana | logger=migrator t=2025-06-13T23:12:50.353602987Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-13T23:12:50.354158554Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=555.417µs
grafana | logger=migrator t=2025-06-13T23:12:50.35757541Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-13T23:12:50.35838994Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=817.92µs
grafana | logger=migrator t=2025-06-13T23:12:50.361409997Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-13T23:12:50.361433448Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.252µs
grafana | logger=migrator t=2025-06-13T23:12:50.367512913Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-13T23:12:50.369554593Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.04128ms
service account foreign key" grafana | logger=migrator t=2025-06-13T23:12:50.374662301Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.84602ms grafana | logger=migrator t=2025-06-13T23:12:50.377993113Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-13T23:12:50.378153961Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=160.918µs grafana | logger=migrator t=2025-06-13T23:12:50.383587715Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-13T23:12:50.391390125Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=7.795649ms grafana | logger=migrator t=2025-06-13T23:12:50.395040402Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-13T23:12:50.397792296Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.753734ms grafana | logger=migrator t=2025-06-13T23:12:50.401038894Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-13T23:12:50.40178782Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=748.536µs grafana | logger=migrator t=2025-06-13T23:12:50.406835946Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-13T23:12:50.407403833Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=567.587µs grafana | logger=migrator t=2025-06-13T23:12:50.410572258Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-13T23:12:50.411380167Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=807.42µs grafana | logger=migrator t=2025-06-13T23:12:50.414492688Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-13T23:12:50.415516668Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.02226ms grafana | logger=migrator t=2025-06-13T23:12:50.421943801Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-13T23:12:50.423264865Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.321075ms grafana | logger=migrator t=2025-06-13T23:12:50.426626428Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-13T23:12:50.427866839Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.239341ms grafana | logger=migrator t=2025-06-13T23:12:50.431057964Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-13T23:12:50.431085015Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=28.001µs grafana | logger=migrator t=2025-06-13T23:12:50.436042976Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-13T23:12:50.436074408Z 
level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=31.092µs grafana | logger=migrator t=2025-06-13T23:12:50.439050853Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-13T23:12:50.441991856Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.939753ms grafana | logger=migrator t=2025-06-13T23:12:50.448626648Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-13T23:12:50.451371482Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.744464ms grafana | logger=migrator t=2025-06-13T23:12:50.454449592Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-13T23:12:50.454478423Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=29.022µs grafana | logger=migrator t=2025-06-13T23:12:50.45955066Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-13T23:12:50.460256884Z level=info msg="Migration successfully executed" id="create quota table v1" duration=705.234µs grafana | logger=migrator t=2025-06-13T23:12:50.463324743Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-13T23:12:50.464631077Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.305384ms grafana | logger=migrator t=2025-06-13T23:12:50.468979638Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-13T23:12:50.469005849Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.841µs grafana | logger=migrator t=2025-06-13T23:12:50.475047543Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-13T23:12:50.476969017Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.922944ms grafana | logger=migrator t=2025-06-13T23:12:50.481078637Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-13T23:12:50.482077715Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=998.318µs grafana | logger=migrator t=2025-06-13T23:12:50.487160992Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-13T23:12:50.493800885Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.638243ms grafana | logger=migrator t=2025-06-13T23:12:50.499286322Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-13T23:12:50.499362606Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=85.214µs grafana | logger=migrator t=2025-06-13T23:12:50.502922799Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-13T23:12:50.503447945Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=524.805µs grafana | 
grafana | logger=migrator t=2025-06-13T23:12:50.52556048Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-13T23:12:50.526214452Z level=info msg="Migration successfully executed" id="create session table" duration=654.372µs
grafana | logger=migrator t=2025-06-13T23:12:50.530020197Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-13T23:12:50.530106131Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.824µs
grafana | logger=migrator t=2025-06-13T23:12:50.532200333Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-13T23:12:50.532278767Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=78.964µs
grafana | logger=migrator t=2025-06-13T23:12:50.535250831Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-13T23:12:50.535901373Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=650.492µs
grafana | logger=migrator t=2025-06-13T23:12:50.542604699Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-13T23:12:50.544786585Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=2.181696ms
grafana | logger=migrator t=2025-06-13T23:12:50.549074084Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-13T23:12:50.549099895Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.741µs
grafana | logger=migrator t=2025-06-13T23:12:50.551898551Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-13T23:12:50.551925822Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=31.651µs
grafana | logger=migrator t=2025-06-13T23:12:50.556986878Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-13T23:12:50.561016654Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.029386ms
grafana | logger=migrator t=2025-06-13T23:12:50.564652961Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-13T23:12:50.567863997Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.210686ms
grafana | logger=migrator t=2025-06-13T23:12:50.573298452Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-13T23:12:50.573729083Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=430.681µs
grafana | logger=migrator t=2025-06-13T23:12:50.578315886Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-13T23:12:50.578445312Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=131.266µs
grafana | logger=migrator t=2025-06-13T23:12:50.58169906Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-13T23:12:50.582705169Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=983.848µs
grafana | logger=migrator t=2025-06-13T23:12:50.585789319Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-13T23:12:50.585816041Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=27.371µs
grafana | logger=migrator t=2025-06-13T23:12:50.591528708Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-13T23:12:50.59485315Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.323382ms
grafana | logger=migrator t=2025-06-13T23:12:50.598627744Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-13T23:12:50.59876266Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=134.746µs
grafana | logger=migrator t=2025-06-13T23:12:50.600892924Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-13T23:12:50.603167984Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.27451ms
grafana | logger=migrator t=2025-06-13T23:12:50.609526294Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-13T23:12:50.612803753Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.278269ms
grafana | logger=migrator t=2025-06-13T23:12:50.61582643Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T23:12:50.615865642Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=35.352µs
grafana | logger=migrator t=2025-06-13T23:12:50.619745391Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-13T23:12:50.620737479Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=991.088µs
grafana | logger=migrator t=2025-06-13T23:12:50.623615999Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-13T23:12:50.624588166Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=974.647µs
grafana | logger=migrator t=2025-06-13T23:12:50.62899045Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-13T23:12:50.63001889Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.02773ms
grafana | logger=migrator t=2025-06-13T23:12:50.633627646Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-13T23:12:50.634491848Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=863.192µs
grafana | logger=migrator t=2025-06-13T23:12:50.638964385Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-13T23:12:50.639960954Z level=info msg="Migration successfully executed" id="add index alert state" duration=996.399µs
logger=migrator t=2025-06-13T23:12:50.643271915Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-13T23:12:50.644133317Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=860.672µs grafana | logger=migrator t=2025-06-13T23:12:50.647435687Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-13T23:12:50.648080829Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=644.812µs grafana | logger=migrator t=2025-06-13T23:12:50.651333687Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-13T23:12:50.652202609Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=868.452µs grafana | logger=migrator t=2025-06-13T23:12:50.656869616Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-13T23:12:50.658012322Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.141286ms grafana | logger=migrator t=2025-06-13T23:12:50.661784985Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-13T23:12:50.678057557Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=16.265781ms grafana | logger=migrator t=2025-06-13T23:12:50.682196108Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-13T23:12:50.683016358Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=820.28µs grafana | logger=migrator t=2025-06-13T23:12:50.689271742Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-13T23:12:50.690395587Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.124245ms grafana | logger=migrator t=2025-06-13T23:12:50.694635243Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:50.694951538Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=316.175µs grafana | logger=migrator t=2025-06-13T23:12:50.698030598Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-13T23:12:50.698594276Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=567.227µs grafana | logger=migrator t=2025-06-13T23:12:50.70566917Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-13T23:12:50.70650208Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=832.53µs grafana | logger=migrator t=2025-06-13T23:12:50.709844753Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-13T23:12:50.716289586Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.439853ms grafana | logger=migrator t=2025-06-13T23:12:50.720410797Z level=info 
msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-13T23:12:50.723136109Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.724292ms grafana | logger=migrator t=2025-06-13T23:12:50.728451558Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-13T23:12:50.732348567Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.896279ms grafana | logger=migrator t=2025-06-13T23:12:50.735226637Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-13T23:12:50.739475364Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.247357ms grafana | logger=migrator t=2025-06-13T23:12:50.74330728Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-13T23:12:50.744345171Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.037911ms grafana | logger=migrator t=2025-06-13T23:12:50.752777191Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-13T23:12:50.752834173Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=59.763µs grafana | logger=migrator t=2025-06-13T23:12:50.756684031Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-13T23:12:50.756715662Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=31.951µs grafana | logger=migrator t=2025-06-13T23:12:50.760614852Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-13T23:12:50.76180365Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.190648ms grafana | logger=migrator t=2025-06-13T23:12:50.765873528Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-13T23:12:50.766994002Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.119644ms grafana | logger=migrator t=2025-06-13T23:12:50.772413506Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-13T23:12:50.77331851Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=903.874µs grafana | logger=migrator t=2025-06-13T23:12:50.77682653Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-13T23:12:50.77824944Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.415549ms grafana | logger=migrator t=2025-06-13T23:12:50.78155164Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-13T23:12:50.783161378Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.608568ms grafana | logger=migrator t=2025-06-13T23:12:50.788756511Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-13T23:12:50.793210217Z level=info 
msg="Migration successfully executed" id="Add for to alert table" duration=4.452957ms grafana | logger=migrator t=2025-06-13T23:12:50.796310558Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-13T23:12:50.801093951Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.779872ms grafana | logger=migrator t=2025-06-13T23:12:50.806655811Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-13T23:12:50.806956086Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=296.594µs grafana | logger=migrator t=2025-06-13T23:12:50.815503641Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-13T23:12:50.817024555Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.522664ms grafana | logger=migrator t=2025-06-13T23:12:50.821178637Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-13T23:12:50.822127714Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=944.886µs grafana | logger=migrator t=2025-06-13T23:12:50.827704015Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-13T23:12:50.832924409Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.219004ms grafana | logger=migrator t=2025-06-13T23:12:50.837665209Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-13T23:12:50.837693101Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=30.932µs grafana | logger=migrator t=2025-06-13T23:12:50.840587481Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-13T23:12:50.841464364Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=877.223µs grafana | logger=migrator t=2025-06-13T23:12:50.844347424Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-13T23:12:50.84508844Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=741.296µs grafana | logger=migrator t=2025-06-13T23:12:50.851118994Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-13T23:12:50.851235989Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=117.066µs grafana | logger=migrator t=2025-06-13T23:12:50.85412728Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-13T23:12:50.855212243Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.084323ms grafana | logger=migrator t=2025-06-13T23:12:50.858758225Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-13T23:12:50.859446349Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=687.463µs grafana | logger=migrator 
t=2025-06-13T23:12:50.865721354Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-13T23:12:50.867798765Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=2.073071ms grafana | logger=migrator t=2025-06-13T23:12:50.871210531Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-13T23:12:50.872580787Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.370996ms grafana | logger=migrator t=2025-06-13T23:12:50.876183163Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-13T23:12:50.877282156Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.098204ms grafana | logger=migrator t=2025-06-13T23:12:50.882931241Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-13T23:12:50.883884687Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=953.286µs grafana | logger=migrator t=2025-06-13T23:12:50.886927995Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-13T23:12:50.886957727Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.372µs grafana | logger=migrator t=2025-06-13T23:12:50.889940672Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.894569897Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.622864ms grafana | logger=migrator t=2025-06-13T23:12:50.898100889Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-13T23:12:50.899030904Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=929.696µs grafana | logger=migrator t=2025-06-13T23:12:50.903560984Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.908052233Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.490318ms grafana | logger=migrator t=2025-06-13T23:12:50.911575264Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-13T23:12:50.912277088Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=702.074µs grafana | logger=migrator t=2025-06-13T23:12:50.917903172Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-13T23:12:50.918887599Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=983.417µs grafana | logger=migrator t=2025-06-13T23:12:50.921787401Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-13T23:12:50.922669063Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=885.763µs grafana | logger=migrator t=2025-06-13T23:12:50.92588042Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-13T23:12:50.937010711Z level=info msg="Migration successfully executed" 
id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.129111ms grafana | logger=migrator t=2025-06-13T23:12:50.942668886Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-13T23:12:50.943282696Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=614.34µs grafana | logger=migrator t=2025-06-13T23:12:50.946278512Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-13T23:12:50.947221548Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=941.785µs grafana | logger=migrator t=2025-06-13T23:12:50.949932999Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-13T23:12:50.950231604Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=298.105µs grafana | logger=migrator t=2025-06-13T23:12:50.953061062Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-13T23:12:50.953678182Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=616.58µs grafana | logger=migrator t=2025-06-13T23:12:50.959223441Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-13T23:12:50.959470123Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=249.672µs grafana | logger=migrator t=2025-06-13T23:12:50.962591915Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.967061202Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.468777ms grafana | logger=migrator t=2025-06-13T23:12:50.969989045Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.97502904Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.039405ms grafana | logger=migrator t=2025-06-13T23:12:50.980686045Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.981560308Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=874.022µs grafana | logger=migrator t=2025-06-13T23:12:50.98469134Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-13T23:12:50.986063797Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.371626ms grafana | logger=migrator t=2025-06-13T23:12:50.989334556Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-13T23:12:50.989695773Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=361.347µs grafana | logger=migrator t=2025-06-13T23:12:50.995132518Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-13T23:12:50.999389125Z level=info msg="Migration successfully 
executed" id="Add epoch_end column" duration=4.255817ms grafana | logger=migrator t=2025-06-13T23:12:51.002323436Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-13T23:12:51.003225119Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=900.793µs grafana | logger=migrator t=2025-06-13T23:12:51.006227123Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-13T23:12:51.006395041Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=167.558µs grafana | logger=migrator t=2025-06-13T23:12:51.00887688Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-13T23:12:51.009242918Z level=info msg="Migration successfully executed" id="Move region to single row" duration=365.348µs grafana | logger=migrator t=2025-06-13T23:12:51.014761473Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.015564751Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=802.368µs grafana | logger=migrator t=2025-06-13T23:12:51.01843898Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.019237108Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=797.549µs grafana | logger=migrator t=2025-06-13T23:12:51.025222605Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.026110668Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=887.033µs grafana | logger=migrator t=2025-06-13T23:12:51.029736462Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.030631255Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=894.113µs grafana | logger=migrator t=2025-06-13T23:12:51.03323378Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.034023738Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=791.998µs grafana | logger=migrator t=2025-06-13T23:12:51.038879521Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-13T23:12:51.039725772Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=845.351µs grafana | logger=migrator t=2025-06-13T23:12:51.042281575Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-13T23:12:51.042302866Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=20.691µs grafana | logger=migrator t=2025-06-13T23:12:51.044497801Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-13T23:12:51.044516272Z level=info msg="Migration 
successfully executed" id="Increase prev_state column to length 40 not null" duration=19.061µs grafana | logger=migrator t=2025-06-13T23:12:51.050752032Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-13T23:12:51.050773273Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=21.201µs grafana | logger=migrator t=2025-06-13T23:12:51.053079584Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-13T23:12:51.054368446Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.26823ms grafana | logger=migrator t=2025-06-13T23:12:51.057665564Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-13T23:12:51.058963146Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.296552ms grafana | logger=migrator t=2025-06-13T23:12:51.062484745Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-13T23:12:51.063321926Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=837.201µs grafana | logger=migrator t=2025-06-13T23:12:51.068358658Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-13T23:12:51.069251521Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=892.572µs grafana | logger=migrator t=2025-06-13T23:12:51.072039964Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-13T23:12:51.072218743Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=176.399µs grafana | logger=migrator t=2025-06-13T23:12:51.075750643Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-13T23:12:51.076096729Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=345.636µs grafana | logger=migrator t=2025-06-13T23:12:51.082479336Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-13T23:12:51.082506487Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=28.191µs grafana | logger=migrator t=2025-06-13T23:12:51.085002617Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-13T23:12:51.091793533Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=6.791686ms grafana | logger=migrator t=2025-06-13T23:12:51.094457161Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-13T23:12:51.095206397Z level=info msg="Migration successfully executed" id="create team table" duration=748.126µs grafana | logger=migrator t=2025-06-13T23:12:51.100379086Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-13T23:12:51.101247698Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=867.551µs grafana | 
logger=migrator t=2025-06-13T23:12:51.10545172Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-13T23:12:51.106581624Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.129075ms grafana | logger=migrator t=2025-06-13T23:12:51.110332814Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-13T23:12:51.11483896Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.503176ms grafana | logger=migrator t=2025-06-13T23:12:51.117441506Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-13T23:12:51.117615914Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=173.869µs grafana | logger=migrator t=2025-06-13T23:12:51.122483418Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-13T23:12:51.12335624Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=872.692µs grafana | logger=migrator t=2025-06-13T23:12:51.126176515Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-13T23:12:51.130706873Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.525907ms grafana | logger=migrator t=2025-06-13T23:12:51.134118607Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-13T23:12:51.138778211Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.648163ms grafana | logger=migrator t=2025-06-13T23:12:51.14376194Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-13T23:12:51.144553218Z level=info msg="Migration successfully executed" id="create team member table" duration=790.348µs grafana | logger=migrator t=2025-06-13T23:12:51.147416525Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-13T23:12:51.148393002Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=974.997µs grafana | logger=migrator t=2025-06-13T23:12:51.151552514Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-13T23:12:51.15251326Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=959.796µs grafana | logger=migrator t=2025-06-13T23:12:51.157308571Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-13T23:12:51.158606613Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.296562ms grafana | logger=migrator t=2025-06-13T23:12:51.162543062Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-13T23:12:51.170126066Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.583844ms grafana | logger=migrator t=2025-06-13T23:12:51.172738242Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-13T23:12:51.176086053Z level=info msg="Migration successfully executed" id="Add column external to team_member 
table" duration=3.347071ms grafana | logger=migrator t=2025-06-13T23:12:51.181260981Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-13T23:12:51.186039951Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.77777ms grafana | logger=migrator t=2025-06-13T23:12:51.189893186Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-13T23:12:51.19081523Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=920.154µs grafana | logger=migrator t=2025-06-13T23:12:51.193739261Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-13T23:12:51.194572211Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=832.06µs grafana | logger=migrator t=2025-06-13T23:12:51.200015042Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-13T23:12:51.200964518Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=948.826µs grafana | logger=migrator t=2025-06-13T23:12:51.204148841Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-13T23:12:51.205090866Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=941.265µs grafana | logger=migrator t=2025-06-13T23:12:51.208115641Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-13T23:12:51.209071927Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=955.666µs grafana | logger=migrator t=2025-06-13T23:12:51.21558456Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-13T23:12:51.217299883Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.715622ms grafana | logger=migrator t=2025-06-13T23:12:51.220658824Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-13T23:12:51.222149466Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.489601ms grafana | logger=migrator t=2025-06-13T23:12:51.225335579Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-13T23:12:51.226264883Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=928.114µs grafana | logger=migrator t=2025-06-13T23:12:51.229191724Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-13T23:12:51.231159188Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.965824ms grafana | logger=migrator t=2025-06-13T23:12:51.234980622Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-13T23:12:51.235826323Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=845.12µs grafana | logger=migrator t=2025-06-13T23:12:51.241407231Z level=info msg="Executing migration" 
id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-13T23:12:51.241732626Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=324.795µs grafana | logger=migrator t=2025-06-13T23:12:51.245256756Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-13T23:12:51.246043053Z level=info msg="Migration successfully executed" id="create tag table" duration=786.017µs grafana | logger=migrator t=2025-06-13T23:12:51.251818771Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-13T23:12:51.252790228Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=971.446µs grafana | logger=migrator t=2025-06-13T23:12:51.256306326Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-13T23:12:51.257690683Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.383877ms grafana | logger=migrator t=2025-06-13T23:12:51.261737657Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-13T23:12:51.263584806Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.839869ms grafana | logger=migrator t=2025-06-13T23:12:51.268129314Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-13T23:12:51.272975497Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=4.845213ms grafana | logger=migrator t=2025-06-13T23:12:51.276517787Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T23:12:51.291018494Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.499647ms grafana | logger=migrator t=2025-06-13T23:12:51.29427597Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-13T23:12:51.295012866Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=736.616µs grafana | logger=migrator t=2025-06-13T23:12:51.299155505Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-13T23:12:51.30009305Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=937.295µs grafana | logger=migrator t=2025-06-13T23:12:51.304024499Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:51.304431148Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=406.719µs grafana | logger=migrator t=2025-06-13T23:12:51.307810151Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-13T23:12:51.308597568Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=786.757µs grafana | logger=migrator t=2025-06-13T23:12:51.313185039Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-13T23:12:51.314472391Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.287612ms grafana | logger=migrator 
t=2025-06-13T23:12:51.319728623Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-13T23:12:51.32132106Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.591307ms grafana | logger=migrator t=2025-06-13T23:12:51.324608128Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-13T23:12:51.324629709Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=22.292µs grafana | logger=migrator t=2025-06-13T23:12:51.329300003Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.337906406Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.605073ms grafana | logger=migrator t=2025-06-13T23:12:51.341601694Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.345324143Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.715798ms grafana | logger=migrator t=2025-06-13T23:12:51.349927284Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.355437039Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.509044ms grafana | logger=migrator t=2025-06-13T23:12:51.359506954Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.36504931Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.544546ms grafana | logger=migrator t=2025-06-13T23:12:51.368286446Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.369337516Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.04655ms grafana | logger=migrator t=2025-06-13T23:12:51.372761021Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.378101677Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.339786ms grafana | logger=migrator t=2025-06-13T23:12:51.38586692Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-13T23:12:51.394862363Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=8.992942ms grafana | logger=migrator t=2025-06-13T23:12:51.398494437Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-13T23:12:51.399079455Z level=info msg="Migration successfully executed" id="create server_lock table" duration=584.178µs grafana | logger=migrator t=2025-06-13T23:12:51.4020905Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-13T23:12:51.402877138Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=784.047µs grafana | logger=migrator t=2025-06-13T23:12:51.407698199Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-13T23:12:51.409286005Z level=info msg="Migration successfully 
executed" id="create user auth token table" duration=1.586336ms grafana | logger=migrator t=2025-06-13T23:12:51.413907067Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-13T23:12:51.414918516Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.010889ms grafana | logger=migrator t=2025-06-13T23:12:51.419115698Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-13T23:12:51.420184599Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.068511ms grafana | logger=migrator t=2025-06-13T23:12:51.424690215Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-13T23:12:51.425732446Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.04204ms grafana | logger=migrator t=2025-06-13T23:12:51.429262205Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-13T23:12:51.437502941Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.237766ms grafana | logger=migrator t=2025-06-13T23:12:51.441417349Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-13T23:12:51.442164635Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=746.836µs grafana | logger=migrator t=2025-06-13T23:12:51.446715444Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-13T23:12:51.455474874Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.756971ms grafana | logger=migrator t=2025-06-13T23:12:51.459529199Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-13T23:12:51.460188171Z level=info msg="Migration successfully executed" id="create cache_data table" duration=658.042µs grafana | logger=migrator t=2025-06-13T23:12:51.46412676Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-13T23:12:51.465652743Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.523673ms grafana | logger=migrator t=2025-06-13T23:12:51.47057518Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-13T23:12:51.471870452Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.294772ms grafana | logger=migrator t=2025-06-13T23:12:51.475766629Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-13T23:12:51.476802679Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.0354ms grafana | logger=migrator t=2025-06-13T23:12:51.480530138Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T23:12:51.480595601Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.601µs grafana | logger=migrator t=2025-06-13T23:12:51.486485514Z level=info 
msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-13T23:12:51.486999329Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=512.574µs grafana | logger=migrator t=2025-06-13T23:12:51.493187926Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-13T23:12:51.494343112Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.154945ms grafana | logger=migrator t=2025-06-13T23:12:51.497982036Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T23:12:51.499533331Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.545084ms grafana | logger=migrator t=2025-06-13T23:12:51.503541243Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T23:12:51.50555608Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.013597ms grafana | logger=migrator t=2025-06-13T23:12:51.510301068Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T23:12:51.510326139Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=25.741µs grafana | logger=migrator t=2025-06-13T23:12:51.512908893Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T23:12:51.513910902Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.001258ms grafana | logger=migrator t=2025-06-13T23:12:51.517724105Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T23:12:51.518646189Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=921.904µs grafana | logger=migrator t=2025-06-13T23:12:51.524027207Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T23:12:51.525624514Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.596547ms grafana | logger=migrator t=2025-06-13T23:12:51.529347023Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T23:12:51.530884167Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.536334ms grafana | logger=migrator t=2025-06-13T23:12:51.535238636Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-13T23:12:51.541282946Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.04193ms grafana | logger=migrator t=2025-06-13T23:12:51.544949163Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-13T23:12:51.545864437Z level=info msg="Migration successfully executed" id="drop 
alert_definition table" duration=913.754µs grafana | logger=migrator t=2025-06-13T23:12:51.549395966Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-13T23:12:51.54948038Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=84.074µs grafana | logger=migrator t=2025-06-13T23:12:51.553907893Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-13T23:12:51.555413435Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.504752ms grafana | logger=migrator t=2025-06-13T23:12:51.559151745Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-13T23:12:51.560791364Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.635538ms grafana | logger=migrator t=2025-06-13T23:12:51.564370736Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-13T23:12:51.56550155Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.130265ms grafana | logger=migrator t=2025-06-13T23:12:51.569912012Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T23:12:51.569932533Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=21.401µs grafana | logger=migrator t=2025-06-13T23:12:51.574521823Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-13T23:12:51.575927571Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.400567ms grafana | logger=migrator t=2025-06-13T23:12:51.579543014Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-13T23:12:51.580520171Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=976.567µs grafana | logger=migrator t=2025-06-13T23:12:51.584900242Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-13T23:12:51.585868938Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=967.816µs grafana | logger=migrator t=2025-06-13T23:12:51.589141456Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-13T23:12:51.590104732Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=965.097µs grafana | logger=migrator t=2025-06-13T23:12:51.595615437Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.60525767Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.640674ms 
grafana | logger=migrator t=2025-06-13T23:12:51.609087984Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.609743345Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=655.181µs grafana | logger=migrator t=2025-06-13T23:12:51.612896757Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.61359032Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=691.753µs grafana | logger=migrator t=2025-06-13T23:12:51.618203122Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.647249717Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.046286ms grafana | logger=migrator t=2025-06-13T23:12:51.650823609Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.677649877Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.824678ms grafana | logger=migrator t=2025-06-13T23:12:51.680991588Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.682000386Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.009608ms grafana | logger=migrator t=2025-06-13T23:12:51.686766515Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.687717491Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=950.876µs grafana | logger=migrator t=2025-06-13T23:12:51.692129963Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-13T23:12:51.698092749Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.962046ms grafana | logger=migrator t=2025-06-13T23:12:51.70144147Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-13T23:12:51.707501281Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.03631ms grafana | logger=migrator t=2025-06-13T23:12:51.712970684Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:51.714060917Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.088862ms grafana | logger=migrator t=2025-06-13T23:12:51.718955282Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-13T23:12:51.720664824Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.709272ms grafana | logger=migrator t=2025-06-13T23:12:51.724412414Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator 
t=2025-06-13T23:12:51.726158098Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.744894ms grafana | logger=migrator t=2025-06-13T23:12:51.730413172Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-13T23:12:51.73141092Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=996.898µs grafana | logger=migrator t=2025-06-13T23:12:51.734991912Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T23:12:51.735012773Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=21.211µs grafana | logger=migrator t=2025-06-13T23:12:51.738822026Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.747998717Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.176581ms grafana | logger=migrator t=2025-06-13T23:12:51.752931644Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.761951377Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.019973ms grafana | logger=migrator t=2025-06-13T23:12:51.765875806Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.772701774Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.824827ms grafana | logger=migrator t=2025-06-13T23:12:51.776130178Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-13T23:12:51.777175839Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.025839ms grafana | logger=migrator t=2025-06-13T23:12:51.781535468Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-13T23:12:51.782690153Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.155425ms grafana | logger=migrator t=2025-06-13T23:12:51.786143559Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.797222682Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=11.076382ms grafana | logger=migrator t=2025-06-13T23:12:51.800970342Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.805529191Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.557808ms grafana | logger=migrator t=2025-06-13T23:12:51.81052309Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-13T23:12:51.811571801Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.047371ms grafana | logger=migrator t=2025-06-13T23:12:51.815139432Z level=info 
msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:51.82528383Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.141927ms grafana | logger=migrator t=2025-06-13T23:12:51.829302123Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:51.833715915Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.412822ms grafana | logger=migrator t=2025-06-13T23:12:51.838777698Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:51.838799689Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=23.061µs grafana | logger=migrator t=2025-06-13T23:12:51.842902486Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:51.844171007Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.271211ms grafana | logger=migrator t=2025-06-13T23:12:51.849469551Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T23:12:51.850583685Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.114204ms grafana | logger=migrator t=2025-06-13T23:12:51.854886042Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-13T23:12:51.856602974Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.715882ms grafana | logger=migrator t=2025-06-13T23:12:51.860751813Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T23:12:51.860788215Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=37.762µs grafana | logger=migrator t=2025-06-13T23:12:51.865801466Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:51.873018233Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.215237ms grafana | logger=migrator t=2025-06-13T23:12:51.876977523Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:51.883607991Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.626408ms grafana | logger=migrator t=2025-06-13T23:12:51.887522829Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:51.894369148Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.846459ms grafana | logger=migrator t=2025-06-13T23:12:51.902508669Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:51.913517388Z level=info msg="Migration 
successfully executed" id="add rule_group_idx column to alert_rule_version" duration=11.009729ms grafana | logger=migrator t=2025-06-13T23:12:51.916624557Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-13T23:12:51.921343854Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.718857ms grafana | logger=migrator t=2025-06-13T23:12:51.926944003Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:51.926964384Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=20.721µs grafana | logger=migrator t=2025-06-13T23:12:51.932780964Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-13T23:12:51.934246384Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.46445ms grafana | logger=migrator t=2025-06-13T23:12:51.939323518Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-13T23:12:51.95103267Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.702022ms grafana | logger=migrator t=2025-06-13T23:12:51.954991231Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-13T23:12:51.955111406Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=121.026µs grafana | logger=migrator t=2025-06-13T23:12:51.958792373Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-13T23:12:51.965561098Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.781346ms grafana | logger=migrator t=2025-06-13T23:12:51.97038109Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-13T23:12:51.971475522Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.093062ms grafana | logger=migrator t=2025-06-13T23:12:51.974888046Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-13T23:12:51.985739958Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.852932ms grafana | logger=migrator t=2025-06-13T23:12:51.989091559Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-13T23:12:51.989804153Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=712.054µs grafana | logger=migrator t=2025-06-13T23:12:51.99453254Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-13T23:12:51.995733138Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.200608ms grafana | logger=migrator t=2025-06-13T23:12:51.999408284Z level=info msg="Executing migration" id="add column 
send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-13T23:12:52.0059727Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.563586ms grafana | logger=migrator t=2025-06-13T23:12:52.01014391Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-13T23:12:52.011071285Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=926.905µs grafana | logger=migrator t=2025-06-13T23:12:52.016851462Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-13T23:12:52.018327063Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.475861ms grafana | logger=migrator t=2025-06-13T23:12:52.021475244Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-13T23:12:52.022408669Z level=info msg="Migration successfully executed" id="create alert_image table" duration=933.085µs grafana | logger=migrator t=2025-06-13T23:12:52.025702848Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-13T23:12:52.026756408Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.053121ms grafana | logger=migrator t=2025-06-13T23:12:52.031657654Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-13T23:12:52.031711016Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=53.623µs grafana | logger=migrator t=2025-06-13T23:12:52.034984813Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-13T23:12:52.036056145Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.071242ms grafana | logger=migrator t=2025-06-13T23:12:52.040364482Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-13T23:12:52.041559059Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.193667ms grafana | logger=migrator t=2025-06-13T23:12:52.046593151Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T23:12:52.047052463Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T23:12:52.050123651Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-13T23:12:52.050620875Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=496.973µs grafana | logger=migrator t=2025-06-13T23:12:52.05302873Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-13T23:12:52.054081411Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.052281ms grafana | logger=migrator 
grafana | logger=migrator t=2025-06-13T23:12:52.058482362Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-13T23:12:52.065451307Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.968585ms
grafana | logger=migrator t=2025-06-13T23:12:52.069129354Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-13T23:12:52.070228226Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.098762ms
grafana | logger=migrator t=2025-06-13T23:12:52.075674758Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-13T23:12:52.077598951Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.925902ms
grafana | logger=migrator t=2025-06-13T23:12:52.081455336Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-13T23:12:52.082941047Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.483961ms
grafana | logger=migrator t=2025-06-13T23:12:52.086800553Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-13T23:12:52.087982229Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.181387ms
grafana | logger=migrator t=2025-06-13T23:12:52.092617232Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-13T23:12:52.094000108Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.381226ms
grafana | logger=migrator t=2025-06-13T23:12:52.099214319Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-13T23:12:52.099348985Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=134.746µs
grafana | logger=migrator t=2025-06-13T23:12:52.104121865Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-13T23:12:52.104173997Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=52.432µs
grafana | logger=migrator t=2025-06-13T23:12:52.1075674Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-13T23:12:52.118311216Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.744226ms
grafana | logger=migrator t=2025-06-13T23:12:52.123219782Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-13T23:12:52.12379049Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=569.867µs
grafana | logger=migrator t=2025-06-13T23:12:52.127260736Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-13T23:12:52.128511836Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.25069ms
grafana | logger=migrator t=2025-06-13T23:12:52.132114189Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-13T23:12:52.13254514Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=430.131µs
grafana | logger=migrator t=2025-06-13T23:12:52.136150743Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-13T23:12:52.137300299Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.149225ms
grafana | logger=migrator t=2025-06-13T23:12:52.141768743Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-13T23:12:52.1429422Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.171426ms
grafana | logger=migrator t=2025-06-13T23:12:52.147897458Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-13T23:12:52.182336062Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.442865ms
grafana | logger=migrator t=2025-06-13T23:12:52.186313123Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-13T23:12:52.191577466Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.259543ms
grafana | logger=migrator t=2025-06-13T23:12:52.195884223Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-13T23:12:52.196073942Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=189.029µs
grafana | logger=migrator t=2025-06-13T23:12:52.200782178Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-13T23:12:52.229191223Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.409035ms
grafana | logger=migrator t=2025-06-13T23:12:52.233364803Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-13T23:12:52.26056062Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.194777ms
grafana | logger=migrator t=2025-06-13T23:12:52.265618073Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-13T23:12:52.266369069Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=750.416µs
grafana | logger=migrator t=2025-06-13T23:12:52.269830035Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-13T23:12:52.271102546Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.270761ms
grafana | logger=migrator t=2025-06-13T23:12:52.27596158Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-13T23:12:52.276420262Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=458.262µs
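
The data_keys sequence above (rename name to id, add a fresh name column, copy the id values into it, rename name to label, rename id back to name) is a column rename carried out in explicit steps. A hedged reconstruction of the SQL those ids suggest, as Go string literals (column types and exact DDL are assumptions; the log records only the migration ids, and the multi-step form explains why the rename steps take tens of milliseconds while the value copy takes microseconds):

package main

import "fmt"

func main() {
	// Hypothetical SQL behind the data_keys migrations logged above,
	// in the same order as the migration ids.
	steps := []string{
		`ALTER TABLE data_keys RENAME COLUMN name TO id`,
		`ALTER TABLE data_keys ADD COLUMN name TEXT NOT NULL DEFAULT ''`,
		`UPDATE data_keys SET name = id`,
		`ALTER TABLE data_keys RENAME COLUMN name TO label`,
		`ALTER TABLE data_keys RENAME COLUMN id TO name`,
	}
	for _, s := range steps {
		fmt.Println(s) // the real migrator records each step in its log table
	}
}
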
level=info msg="Migration successfully executed" id="create permission table" duration=1.006568ms grafana | logger=migrator t=2025-06-13T23:12:52.286288436Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-13T23:12:52.287388529Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.099923ms grafana | logger=migrator t=2025-06-13T23:12:52.291005852Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-13T23:12:52.292113226Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.106814ms grafana | logger=migrator t=2025-06-13T23:12:52.296962709Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-13T23:12:52.297945216Z level=info msg="Migration successfully executed" id="create role table" duration=982.278µs grafana | logger=migrator t=2025-06-13T23:12:52.301646624Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-13T23:12:52.30949171Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.843297ms grafana | logger=migrator t=2025-06-13T23:12:52.313402168Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-13T23:12:52.320885148Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.48238ms grafana | logger=migrator t=2025-06-13T23:12:52.325782963Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-13T23:12:52.326951339Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.168056ms grafana | logger=migrator t=2025-06-13T23:12:52.330761002Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-13T23:12:52.331916408Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.154686ms grafana | logger=migrator t=2025-06-13T23:12:52.335792734Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-13T23:12:52.337015873Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.218868ms grafana | logger=migrator t=2025-06-13T23:12:52.341980831Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-13T23:12:52.343013051Z level=info msg="Migration successfully executed" id="create team role table" duration=1.03194ms grafana | logger=migrator t=2025-06-13T23:12:52.349391237Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-13T23:12:52.350554753Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.162686ms grafana | logger=migrator t=2025-06-13T23:12:52.354795627Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-13T23:12:52.357038515Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.244938ms grafana | logger=migrator t=2025-06-13T23:12:52.362617613Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-13T23:12:52.363896304Z level=info msg="Migration successfully executed" id="add 
index team_role.team_id" duration=1.278242ms grafana | logger=migrator t=2025-06-13T23:12:52.368003861Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-13T23:12:52.369052842Z level=info msg="Migration successfully executed" id="create user role table" duration=1.048231ms grafana | logger=migrator t=2025-06-13T23:12:52.373405341Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-13T23:12:52.374616749Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.210898ms grafana | logger=migrator t=2025-06-13T23:12:52.379690053Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-13T23:12:52.381759702Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.068079ms grafana | logger=migrator t=2025-06-13T23:12:52.3862863Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-13T23:12:52.388122808Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.835909ms grafana | logger=migrator t=2025-06-13T23:12:52.392677717Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-13T23:12:52.393673984Z level=info msg="Migration successfully executed" id="create builtin role table" duration=995.637µs grafana | logger=migrator t=2025-06-13T23:12:52.397815673Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-13T23:12:52.399763467Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.946144ms grafana | logger=migrator t=2025-06-13T23:12:52.405960525Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-13T23:12:52.407654986Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.695481ms grafana | logger=migrator t=2025-06-13T23:12:52.411862308Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-13T23:12:52.417844176Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.981178ms grafana | logger=migrator t=2025-06-13T23:12:52.422768842Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-13T23:12:52.423926648Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.156986ms grafana | logger=migrator t=2025-06-13T23:12:52.429207952Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-13T23:12:52.430413439Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.204808ms grafana | logger=migrator t=2025-06-13T23:12:52.435494854Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-13T23:12:52.436681211Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.185897ms grafana | logger=migrator t=2025-06-13T23:12:52.440384808Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-13T23:12:52.442242448Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=1.856739ms grafana | logger=migrator t=2025-06-13T23:12:52.448423735Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-13T23:12:52.449484596Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.06068ms grafana | logger=migrator t=2025-06-13T23:12:52.45436583Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-13T23:12:52.455571438Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.202608ms grafana | logger=migrator t=2025-06-13T23:12:52.461205339Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-13T23:12:52.469403142Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.196793ms grafana | logger=migrator t=2025-06-13T23:12:52.474217774Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-13T23:12:52.482164475Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.945961ms grafana | logger=migrator t=2025-06-13T23:12:52.485689945Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-13T23:12:52.491449272Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.758796ms grafana | logger=migrator t=2025-06-13T23:12:52.49642084Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-13T23:12:52.504774982Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.357312ms grafana | logger=migrator t=2025-06-13T23:12:52.50952259Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-13T23:12:52.510693146Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.169686ms grafana | logger=migrator t=2025-06-13T23:12:52.514679387Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-13T23:12:52.515848934Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.168557ms grafana | logger=migrator t=2025-06-13T23:12:52.521463163Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-13T23:12:52.522573027Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.109594ms grafana | logger=migrator t=2025-06-13T23:12:52.527627049Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-13T23:12:52.535642165Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.014195ms grafana | logger=migrator t=2025-06-13T23:12:52.53970546Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-13T23:12:52.540899127Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.193217ms grafana | logger=migrator 
t=2025-06-13T23:12:52.54471691Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-13T23:12:52.545800292Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.083042ms grafana | logger=migrator t=2025-06-13T23:12:52.552467263Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-13T23:12:52.55345419Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=986.367µs grafana | logger=migrator t=2025-06-13T23:12:52.557165348Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-13T23:12:52.558300333Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.134025ms grafana | logger=migrator t=2025-06-13T23:12:52.562973137Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T23:12:52.563080213Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=96.435µs grafana | logger=migrator t=2025-06-13T23:12:52.56885628Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-13T23:12:52.570605724Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.749424ms grafana | logger=migrator t=2025-06-13T23:12:52.574174656Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-13T23:12:52.574225668Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=50.873µs grafana | logger=migrator t=2025-06-13T23:12:52.579400457Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-13T23:12:52.579915921Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=515.134µs grafana | logger=migrator t=2025-06-13T23:12:52.583205839Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-13T23:12:52.584269691Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.064941ms grafana | logger=migrator t=2025-06-13T23:12:52.58799615Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-13T23:12:52.589147935Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.150826ms grafana | logger=migrator t=2025-06-13T23:12:52.592432313Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-13T23:12:52.592742368Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=309.694µs grafana | logger=migrator t=2025-06-13T23:12:52.597208222Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-13T23:12:52.59778639Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=577.288µs grafana | logger=migrator t=2025-06-13T23:12:52.601134931Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator 
t=2025-06-13T23:12:52.602118148Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=979.237µs grafana | logger=migrator t=2025-06-13T23:12:52.606790382Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-13T23:12:52.607976609Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.185857ms grafana | logger=migrator t=2025-06-13T23:12:52.613003551Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-13T23:12:52.621496469Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.491888ms grafana | logger=migrator t=2025-06-13T23:12:52.626527991Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-13T23:12:52.626565082Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=38.382µs grafana | logger=migrator t=2025-06-13T23:12:52.630835528Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-13T23:12:52.632524379Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.686731ms grafana | logger=migrator t=2025-06-13T23:12:52.637235285Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-13T23:12:52.638947377Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.710912ms grafana | logger=migrator t=2025-06-13T23:12:52.644223311Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-13T23:12:52.646220177Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.995676ms grafana | logger=migrator t=2025-06-13T23:12:52.65044473Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-13T23:12:52.659754127Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.308368ms grafana | logger=migrator t=2025-06-13T23:12:52.663643914Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.666020328Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.380205ms grafana | logger=migrator t=2025-06-13T23:12:52.670744945Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.672899428Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.154013ms grafana | logger=migrator t=2025-06-13T23:12:52.676621487Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T23:12:52.701511173Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.888696ms grafana | logger=migrator t=2025-06-13T23:12:52.706500772Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-13T23:12:52.707270059Z level=info msg="Migration successfully executed" id="create correlation v2" 
duration=768.967µs grafana | logger=migrator t=2025-06-13T23:12:52.710730036Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-13T23:12:52.711520794Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=790.158µs grafana | logger=migrator t=2025-06-13T23:12:52.716398158Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-13T23:12:52.718146222Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.747374ms grafana | logger=migrator t=2025-06-13T23:12:52.72539918Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-13T23:12:52.728061758Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.660748ms grafana | logger=migrator t=2025-06-13T23:12:52.732394686Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:52.732823657Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=429.141µs grafana | logger=migrator t=2025-06-13T23:12:52.73705602Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-13T23:12:52.738377484Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.324853ms grafana | logger=migrator t=2025-06-13T23:12:52.743439407Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-13T23:12:52.752497362Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.056965ms grafana | logger=migrator t=2025-06-13T23:12:52.756651952Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-13T23:12:52.764968071Z level=info msg="Migration successfully executed" id="add type column" duration=8.31599ms grafana | logger=migrator t=2025-06-13T23:12:52.76826871Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-13T23:12:52.7689079Z level=info msg="Migration successfully executed" id="create entity_events table" duration=638.88µs grafana | logger=migrator t=2025-06-13T23:12:52.77409556Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-13T23:12:52.775104878Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.007869ms grafana | logger=migrator t=2025-06-13T23:12:52.779917779Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.78098217Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.786037133Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.786790269Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T23:12:52.790455956Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | 
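
The correlation entries above follow a recurring rebuild pattern: rename the live table to a throwaway name (correlation_tmp_qwerty), create the v2 schema, copy the rows across, then drop the parked copy. A minimal sketch of that sequence as Go string literals; the v2 column list is illustrative only, since the log records just the migration ids:

package main

import "fmt"

func main() {
	// Rebuild-by-rename, as suggested by the migration ids above.
	steps := []string{
		`ALTER TABLE correlation RENAME TO correlation_tmp_qwerty`,
		`CREATE TABLE correlation (uid TEXT NOT NULL, source_uid TEXT NOT NULL, org_id INTEGER NOT NULL)`,
		`INSERT INTO correlation (uid, source_uid, org_id) SELECT uid, source_uid, org_id FROM correlation_tmp_qwerty`,
		`DROP TABLE correlation_tmp_qwerty`,
	}
	for _, s := range steps {
		fmt.Println(s)
	}
}

This pattern is the portable way to change a table's shape on engines with limited ALTER TABLE support, which would also explain why the rename steps dominate the durations (about 25ms versus microseconds for the row copy here).
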
grafana | logger=migrator t=2025-06-13T23:12:52.790455956Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2025-06-13T23:12:52.791221652Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=764.997µs
grafana | logger=migrator t=2025-06-13T23:12:52.800022125Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2025-06-13T23:12:52.801956128Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.933683ms
grafana | logger=migrator t=2025-06-13T23:12:52.807068264Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-13T23:12:52.808438999Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.409267ms
grafana | logger=migrator t=2025-06-13T23:12:52.814744672Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-13T23:12:52.815977922Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.232289ms
grafana | logger=migrator t=2025-06-13T23:12:52.820565862Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.821684286Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.117534ms
grafana | logger=migrator t=2025-06-13T23:12:52.826371451Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.827427612Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.05485ms
grafana | logger=migrator t=2025-06-13T23:12:52.834937812Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2025-06-13T23:12:52.836689107Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.750524ms
grafana | logger=migrator t=2025-06-13T23:12:52.840803894Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2025-06-13T23:12:52.842570579Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.765615ms
grafana | logger=migrator t=2025-06-13T23:12:52.847533717Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.848323755Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=789.298µs
grafana | logger=migrator t=2025-06-13T23:12:52.851717078Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.852756258Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.03719ms
grafana | logger=migrator t=2025-06-13T23:12:52.85882292Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.860806635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.973184ms
grafana | logger=migrator t=2025-06-13T23:12:52.865619675Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2025-06-13T23:12:52.885813245Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.19303ms
grafana | logger=migrator t=2025-06-13T23:12:52.893182569Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2025-06-13T23:12:52.901926909Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.7434ms
grafana | logger=migrator t=2025-06-13T23:12:52.905743813Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2025-06-13T23:12:52.912369291Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.624098ms
grafana | logger=migrator t=2025-06-13T23:12:52.91817271Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2025-06-13T23:12:52.918403561Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=230.231µs
grafana | logger=migrator t=2025-06-13T23:12:52.921145243Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2025-06-13T23:12:52.929686243Z level=info msg="Migration successfully executed" id="add share column" duration=8.54054ms
grafana | logger=migrator t=2025-06-13T23:12:52.933140189Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2025-06-13T23:12:52.933267795Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=126.336µs
grafana | logger=migrator t=2025-06-13T23:12:52.937430875Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2025-06-13T23:12:52.938275196Z level=info msg="Migration successfully executed" id="create file table" duration=844.051µs
grafana | logger=migrator t=2025-06-13T23:12:52.944426761Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2025-06-13T23:12:52.946293701Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.86628ms
grafana | logger=migrator t=2025-06-13T23:12:52.950285843Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2025-06-13T23:12:52.951421347Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.134815ms
grafana | logger=migrator t=2025-06-13T23:12:52.956117703Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2025-06-13T23:12:52.957500779Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.381476ms
grafana | logger=migrator t=2025-06-13T23:12:52.961137664Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2025-06-13T23:12:52.962979912Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.842488ms
grafana | logger=migrator t=2025-06-13T23:12:52.968618713Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2025-06-13T23:12:52.968639184Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.461µs
grafana | logger=migrator t=2025-06-13T23:12:52.976013339Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2025-06-13T23:12:52.976033499Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.721µs
grafana | logger=migrator t=2025-06-13T23:12:52.980654951Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2025-06-13T23:12:52.981207148Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=551.087µs
grafana | logger=migrator t=2025-06-13T23:12:52.986144985Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2025-06-13T23:12:52.986514473Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=368.278µs
grafana | logger=migrator t=2025-06-13T23:12:52.990295265Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2025-06-13T23:12:52.991979886Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.683971ms
grafana | logger=migrator t=2025-06-13T23:12:52.99539729Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2025-06-13T23:12:53.00581263Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.41491ms
grafana | logger=migrator t=2025-06-13T23:12:53.010060804Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2025-06-13T23:12:53.010218712Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.708µs
grafana | logger=migrator t=2025-06-13T23:12:53.012990485Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2025-06-13T23:12:53.014112849Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.121924ms
grafana | logger=migrator t=2025-06-13T23:12:53.018433086Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2025-06-13T23:12:53.018971032Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=540.346µs
grafana | logger=migrator t=2025-06-13T23:12:53.02329391Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2025-06-13T23:12:53.023517721Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=224.38µs
grafana | logger=migrator t=2025-06-13T23:12:53.02705165Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2025-06-13T23:12:53.027535044Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=482.823µs
grafana | logger=migrator t=2025-06-13T23:12:53.031972187Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2025-06-13T23:12:53.041133387Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.16009ms
grafana | logger=migrator t=2025-06-13T23:12:53.054898338Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2025-06-13T23:12:53.062477262Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.581464ms
grafana | logger=migrator t=2025-06-13T23:12:53.067151307Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2025-06-13T23:12:53.067955915Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=803.898µs
grafana | logger=migrator t=2025-06-13T23:12:53.072481833Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
grafana | logger=migrator t=2025-06-13T23:12:53.142799281Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=70.316548ms
grafana | logger=migrator t=2025-06-13T23:12:53.147443874Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
grafana | logger=migrator t=2025-06-13T23:12:53.148338717Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=894.663µs
grafana | logger=migrator t=2025-06-13T23:12:53.154070772Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2025-06-13T23:12:53.154877761Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=806.389µs
grafana | logger=migrator t=2025-06-13T23:12:53.160465999Z level=info msg="Executing migration" id="add primary key to seed_assigment"
grafana | logger=migrator t=2025-06-13T23:12:53.18317091Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.704121ms
grafana | logger=migrator t=2025-06-13T23:12:53.18816356Z level=info msg="Executing migration" id="add origin column to seed_assignment"
grafana | logger=migrator t=2025-06-13T23:12:53.195601637Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.432477ms
grafana | logger=migrator t=2025-06-13T23:12:53.199448972Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
grafana | logger=migrator t=2025-06-13T23:12:53.199668473Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=218.911µs
grafana | logger=migrator t=2025-06-13T23:12:53.204697334Z level=info msg="Executing migration" id="prevent seeding OnCall access"
grafana | logger=migrator t=2025-06-13T23:12:53.204856072Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=158.528µs
grafana | logger=migrator t=2025-06-13T23:12:53.209232672Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
grafana | logger=migrator t=2025-06-13T23:12:53.20939835Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=165.358µs
grafana | logger=migrator t=2025-06-13T23:12:53.213890936Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
grafana | logger=migrator t=2025-06-13T23:12:53.214139678Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=246.242µs
level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-13T23:12:53.218788271Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=149.937µs grafana | logger=migrator t=2025-06-13T23:12:53.223253736Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-13T23:12:53.223881636Z level=info msg="Migration successfully executed" id="create folder table" duration=627.66µs grafana | logger=migrator t=2025-06-13T23:12:53.228219814Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-13T23:12:53.229014032Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=793.608µs grafana | logger=migrator t=2025-06-13T23:12:53.234716356Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-13T23:12:53.235575608Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=858.642µs grafana | logger=migrator t=2025-06-13T23:12:53.24540975Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-13T23:12:53.245428491Z level=info msg="Migration successfully executed" id="Update folder title length" duration=18.761µs grafana | logger=migrator t=2025-06-13T23:12:53.249938638Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T23:12:53.250739766Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=800.558µs grafana | logger=migrator t=2025-06-13T23:12:53.255832911Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T23:12:53.256626569Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=793.338µs grafana | logger=migrator t=2025-06-13T23:12:53.260227372Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-13T23:12:53.261063872Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=835.75µs grafana | logger=migrator t=2025-06-13T23:12:53.265855302Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-13T23:12:53.266155947Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=300.235µs grafana | logger=migrator t=2025-06-13T23:12:53.269455175Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-13T23:12:53.269637804Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=182.379µs grafana | logger=migrator t=2025-06-13T23:12:53.272907161Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-13T23:12:53.273865057Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=960.146µs grafana | logger=migrator t=2025-06-13T23:12:53.277428978Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | 
logger=migrator t=2025-06-13T23:12:53.278238097Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=808.739µs grafana | logger=migrator t=2025-06-13T23:12:53.282562585Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T23:12:53.283308151Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=745.076µs grafana | logger=migrator t=2025-06-13T23:12:53.287387067Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T23:12:53.28828352Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=896.033µs grafana | logger=migrator t=2025-06-13T23:12:53.293239948Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T23:12:53.294000474Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=759.646µs grafana | logger=migrator t=2025-06-13T23:12:53.298334343Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T23:12:53.299266547Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=931.015µs grafana | logger=migrator t=2025-06-13T23:12:53.302526044Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-13T23:12:53.303429837Z level=info msg="Migration successfully executed" id="create anon_device table" duration=903.143µs grafana | logger=migrator t=2025-06-13T23:12:53.308523842Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-13T23:12:53.309715909Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.191727ms grafana | logger=migrator t=2025-06-13T23:12:53.313022508Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-13T23:12:53.314165323Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.141635ms grafana | logger=migrator t=2025-06-13T23:12:53.319035457Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-13T23:12:53.319879958Z level=info msg="Migration successfully executed" id="create signing_key table" duration=843.9µs grafana | logger=migrator t=2025-06-13T23:12:53.325331919Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-13T23:12:53.327683222Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.349743ms grafana | logger=migrator t=2025-06-13T23:12:53.332766747Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-13T23:12:53.334538592Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.771086ms grafana | logger=migrator t=2025-06-13T23:12:53.33825512Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-13T23:12:53.338535384Z level=info 
msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=280.514µs grafana | logger=migrator t=2025-06-13T23:12:53.348597797Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-13T23:12:53.360989392Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.401875ms grafana | logger=migrator t=2025-06-13T23:12:53.364753093Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-13T23:12:53.365367043Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=614.13µs grafana | logger=migrator t=2025-06-13T23:12:53.36925891Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T23:12:53.369346894Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=87.964µs grafana | logger=migrator t=2025-06-13T23:12:53.374559244Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T23:12:53.376606173Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.051108ms grafana | logger=migrator t=2025-06-13T23:12:53.383051902Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T23:12:53.383108685Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=56.903µs grafana | logger=migrator t=2025-06-13T23:12:53.386983071Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T23:12:53.388337966Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.353945ms grafana | logger=migrator t=2025-06-13T23:12:53.393014661Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T23:12:53.394259111Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.242719ms grafana | logger=migrator t=2025-06-13T23:12:53.400223947Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T23:12:53.402264375Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.038508ms grafana | logger=migrator t=2025-06-13T23:12:53.407714127Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-13T23:12:53.409663601Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.949994ms grafana | logger=migrator t=2025-06-13T23:12:53.414564616Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-13T23:12:53.415436698Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=869.182µs grafana | logger=migrator t=2025-06-13T23:12:53.42192622Z level=info msg="Executing migration" id="add back entry 
for orgid=0 migrated status" grafana | logger=migrator t=2025-06-13T23:12:53.422282837Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=356.427µs grafana | logger=migrator t=2025-06-13T23:12:53.425941113Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-13T23:12:53.426655237Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=713.654µs grafana | logger=migrator t=2025-06-13T23:12:53.431608035Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-13T23:12:53.432662235Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.05308ms grafana | logger=migrator t=2025-06-13T23:12:53.437697657Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-13T23:12:53.439448371Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.749684ms grafana | logger=migrator t=2025-06-13T23:12:53.446068069Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-13T23:12:53.456089261Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.022192ms grafana | logger=migrator t=2025-06-13T23:12:53.461249209Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-13T23:12:53.470589837Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.339358ms grafana | logger=migrator t=2025-06-13T23:12:53.475521354Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-13T23:12:53.483264946Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.743012ms grafana | logger=migrator t=2025-06-13T23:12:53.487199245Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-13T23:12:53.496600327Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.400532ms grafana | logger=migrator t=2025-06-13T23:12:53.500490594Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-13T23:12:53.500977027Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=469.882µs grafana | logger=migrator t=2025-06-13T23:12:53.506067332Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-13T23:12:53.508352392Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.285009ms grafana | logger=migrator t=2025-06-13T23:12:53.515058074Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-13T23:12:53.525246313Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.183379ms grafana | logger=migrator t=2025-06-13T23:12:53.529852924Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-13T23:12:53.530123457Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=273.513µs grafana | logger=migrator 
t=2025-06-13T23:12:53.534782831Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-13T23:12:53.53601553Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.232429ms grafana | logger=migrator t=2025-06-13T23:12:53.540930587Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T23:12:53.566730726Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.80122ms grafana | logger=migrator t=2025-06-13T23:12:53.572061012Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-13T23:12:53.572803028Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=740.886µs grafana | logger=migrator t=2025-06-13T23:12:53.578202767Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-13T23:12:53.580094408Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.890461ms grafana | logger=migrator t=2025-06-13T23:12:53.585389342Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:53.585729239Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=339.077µs grafana | logger=migrator t=2025-06-13T23:12:53.594600695Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-13T23:12:53.595912978Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.312273ms grafana | logger=migrator t=2025-06-13T23:12:53.600067807Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T23:12:53.626197283Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.127055ms grafana | logger=migrator t=2025-06-13T23:12:53.630207815Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-13T23:12:53.630869837Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=661.232µs grafana | logger=migrator t=2025-06-13T23:12:53.635387284Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-13T23:12:53.636610743Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.221309ms grafana | logger=migrator t=2025-06-13T23:12:53.641741089Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-13T23:12:53.642246144Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=504.235µs grafana | logger=migrator t=2025-06-13T23:12:53.648281444Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-13T23:12:53.650227677Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" 
duration=1.939984ms grafana | logger=migrator t=2025-06-13T23:12:53.655420357Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-13T23:12:53.671759351Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=16.337225ms grafana | logger=migrator t=2025-06-13T23:12:53.677253315Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-13T23:12:53.686037707Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=8.783802ms grafana | logger=migrator t=2025-06-13T23:12:53.689867301Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-13T23:12:53.6998383Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.969809ms grafana | logger=migrator t=2025-06-13T23:12:53.708106478Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-13T23:12:53.715007039Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.898802ms grafana | logger=migrator t=2025-06-13T23:12:53.719678083Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-13T23:12:53.729282675Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.603942ms grafana | logger=migrator t=2025-06-13T23:12:53.734980399Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-13T23:12:53.745170818Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.18978ms grafana | logger=migrator t=2025-06-13T23:12:53.750077324Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-13T23:12:53.750887883Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=811.669µs grafana | logger=migrator t=2025-06-13T23:12:53.755011911Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-13T23:12:53.79829293Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=43.271448ms grafana | logger=migrator t=2025-06-13T23:12:53.803344103Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-13T23:12:53.812940554Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.596191ms grafana | logger=migrator t=2025-06-13T23:12:53.816567038Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-13T23:12:53.82411364Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.545742ms grafana | logger=migrator t=2025-06-13T23:12:53.831469534Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-13T23:12:53.844166424Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=12.69645ms grafana | logger=migrator t=2025-06-13T23:12:53.847587798Z level=info msg="Executing migration" id="add 
cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-13T23:12:53.85449105Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=6.901692ms grafana | logger=migrator t=2025-06-13T23:12:53.860634685Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-13T23:12:53.860654286Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=29.712µs grafana | logger=migrator t=2025-06-13T23:12:53.866426973Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-13T23:12:53.866457615Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=31.121µs grafana | logger=migrator t=2025-06-13T23:12:53.87010558Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:53.88176111Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.656ms grafana | logger=migrator t=2025-06-13T23:12:53.885272288Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:53.896862455Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.590577ms grafana | logger=migrator t=2025-06-13T23:12:53.902977679Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-13T23:12:53.903353157Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=375.158µs grafana | logger=migrator t=2025-06-13T23:12:53.906360481Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-13T23:12:53.906586652Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=226.041µs grafana | logger=migrator t=2025-06-13T23:12:53.908951326Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:53.918686554Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=9.735148ms grafana | logger=migrator t=2025-06-13T23:12:53.924246051Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:53.932840934Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.593722ms grafana | logger=migrator t=2025-06-13T23:12:53.93859887Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T23:12:53.948572639Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.972709ms grafana | logger=migrator t=2025-06-13T23:12:53.952385172Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T23:12:53.96003838Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.653308ms grafana | logger=migrator 
t=2025-06-13T23:12:53.96356906Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-13T23:12:53.964228581Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=659.041µs grafana | logger=migrator t=2025-06-13T23:12:53.973618402Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:53.983705727Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=10.090715ms grafana | logger=migrator t=2025-06-13T23:12:53.988468916Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:54.001411958Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=12.944682ms grafana | logger=migrator t=2025-06-13T23:12:54.005057804Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-13T23:12:54.005267574Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=209.19µs grafana | logger=migrator t=2025-06-13T23:12:54.009888926Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-13T23:12:54.01120932Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=1.361476ms grafana | logger=migrator t=2025-06-13T23:12:54.017301023Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-13T23:12:54.019202795Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.901712ms grafana | logger=migrator t=2025-06-13T23:12:54.026422753Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-13T23:12:54.026454194Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=33.981µs grafana | logger=migrator t=2025-06-13T23:12:54.038683503Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-13T23:12:54.038712415Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=36.121µs grafana | logger=migrator t=2025-06-13T23:12:54.04276219Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-13T23:12:54.043340077Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=577.267µs grafana | logger=migrator t=2025-06-13T23:12:54.04857537Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:54.060556497Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.981027ms grafana | logger=migrator t=2025-06-13T23:12:54.06541239Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:54.075265075Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule 
table" duration=9.850005ms grafana | logger=migrator t=2025-06-13T23:12:54.082298324Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-13T23:12:54.083273401Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=979.227µs grafana | logger=migrator t=2025-06-13T23:12:54.08970454Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-13T23:12:54.091633993Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.929113ms grafana | logger=migrator t=2025-06-13T23:12:54.097523907Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-13T23:12:54.10776537Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.240783ms grafana | logger=migrator t=2025-06-13T23:12:54.113104437Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:54.1225003Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.395493ms grafana | logger=migrator t=2025-06-13T23:12:54.126270941Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-13T23:12:54.126294602Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-13T23:12:54.126511273Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-13T23:12:54.126530444Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=259.383µs grafana | logger=migrator t=2025-06-13T23:12:54.131124395Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-13T23:12:54.131760956Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=635.74µs grafana | logger=migrator t=2025-06-13T23:12:54.136528185Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T23:12:54.137713732Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.185137ms grafana | logger=migrator t=2025-06-13T23:12:54.141502145Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-13T23:12:54.142787537Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.284762ms grafana | logger=migrator t=2025-06-13T23:12:54.146778769Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-13T23:12:54.148328223Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.548044ms grafana | logger=migrator t=2025-06-13T23:12:54.154612316Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 
grafana | logger=migrator t=2025-06-13T23:12:54.156677856Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.065539ms grafana | logger=migrator t=2025-06-13T23:12:54.160841156Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:54.170665319Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.826293ms grafana | logger=migrator t=2025-06-13T23:12:54.174507714Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:54.184790029Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=10.281055ms grafana | logger=migrator t=2025-06-13T23:12:54.188276897Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-13T23:12:54.198067259Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.789442ms grafana | logger=migrator t=2025-06-13T23:12:54.20265103Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-13T23:12:54.212743606Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.093836ms grafana | logger=migrator t=2025-06-13T23:12:54.216084327Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-13T23:12:54.21636016Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-13T23:12:54.21637388Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=289.504µs grafana | logger=migrator t=2025-06-13T23:12:54.219675959Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-13T23:12:54.220579863Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=903.614µs grafana | logger=migrator t=2025-06-13T23:12:54.224796626Z level=info msg="migrations completed" performed=654 skipped=0 duration=4.682351833s grafana | logger=migrator t=2025-06-13T23:12:54.225551792Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-13T23:12:54.243267376Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-13T23:12:54.243604112Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-13T23:12:54.250184549Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T23:12:54.345852986Z level=info msg="Restored cache from database" duration=541.256µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.354569946Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-13T23:12:54.354587217Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-13T23:12:54.362106899Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-13T23:12:54.366355943Z level=info msg="Migration successfully executed" id="create resource_migration_log table" 
duration=4.240024ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.370480902Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-13T23:12:54.370571876Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=91.564µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.374966748Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-13T23:12:54.375322515Z level=info msg="Migration successfully executed" id="drop table resource" duration=355.107µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.379967839Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-13T23:12:54.381767966Z level=info msg="Migration successfully executed" id="create table resource" duration=1.794106ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.385523566Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-13T23:12:54.387774125Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=2.250149ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.391521475Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.391804839Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=283.374µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.396945586Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.398122323Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.175367ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.401716286Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-13T23:12:54.403107733Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.390597ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.406443624Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-13T23:12:54.407712105Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.270251ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.41196519Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-13T23:12:54.412140198Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=173.038µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.41550627Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-13T23:12:54.416452666Z level=info msg="Migration successfully executed" id="create table resource_version" duration=945.046µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.42048939Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-13T23:12:54.421818314Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.326754ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.4279559Z level=info msg="Executing 
migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-13T23:12:54.428114477Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=158.427µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.433693606Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-13T23:12:54.435382678Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.687881ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.43980424Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-13T23:12:54.441852799Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.046899ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.446781446Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-13T23:12:54.448014366Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.23265ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.45371422Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.467126046Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.411826ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.470714269Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-13T23:12:54.479179717Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.463838ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.483443242Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-13T23:12:54.484735074Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.291732ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.490140255Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-13T23:12:54.492111059Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.969515ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.49565201Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.508122761Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.47101ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.511421739Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-13T23:12:54.519048257Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=7.622707ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.524636726Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-13T23:12:54.524669977Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-13T23:12:54.525119809Z level=info msg="Migration 
successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=490.694µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.528533804Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-13T23:12:54.530029846Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.494512ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.534390976Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.547289517Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.898482ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.552736259Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-13T23:12:54.553658203Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=921.404µs grafana | logger=resource-migrator t=2025-06-13T23:12:54.557556671Z level=info msg="migrations completed" performed=26 skipped=0 duration=195.497595ms grafana | logger=resource-migrator t=2025-06-13T23:12:54.558739098Z level=info msg="Unlocking database" grafana | t=2025-06-13T23:12:54.559138697Z level=info caller=logger.go:214 time=2025-06-13T23:12:54.559104886Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-13T23:12:54.5722706Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-13T23:12:54.615632438Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-13T23:12:54.61566815Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-13T23:12:54.615739173Z level=info msg="Plugins loaded" count=53 duration=43.469653ms grafana | logger=query_data t=2025-06-13T23:12:54.621763433Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-13T23:12:54.627290489Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T23:12:54.64266752Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-13T23:12:54.653120253Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-13T23:12:54.653147075Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-13T23:12:54.657217731Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-13T23:12:54.657729815Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T23:12:54.658061021Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2025-06-13T23:12:54.658741084Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:54.66136437Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=http.server t=2025-06-13T23:12:54.675756343Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | 
logger=plugins.update.checker t=2025-06-13T23:12:54.756328074Z level=info msg="Update check succeeded" duration=97.268735ms grafana | logger=grafana.update.checker t=2025-06-13T23:12:54.760436672Z level=info msg="Update check succeeded" duration=101.853175ms grafana | logger=ngalert.state.manager t=2025-06-13T23:12:54.775177451Z level=info msg="State cache has been initialized" states=0 duration=117.448576ms grafana | logger=ngalert.scheduler t=2025-06-13T23:12:54.775244365Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-13T23:12:54.775315708Z level=info msg=starting first_tick=2025-06-13T23:13:00Z grafana | logger=provisioning.datasources t=2025-06-13T23:12:54.779646937Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.79135007Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.801352432Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.802510668Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=provisioning.alerting t=2025-06-13T23:12:54.826393038Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-13T23:12:54.82643192Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-13T23:12:54.853931514Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T23:12:54.928163879Z level=info msg="Patterns update finished" duration=135.153029ms grafana | logger=plugin.installer t=2025-06-13T23:12:55.014542249Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-13T23:12:55.072742732Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-13T23:12:55.107339108Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.107371849Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=445.984188ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.107396151Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.297882154Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.298612309Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.29925715Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.304618188Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.305818436Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.306435906Z level=info msg="Adding GroupVersion 
dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.307272616Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=plugin.installer t=2025-06-13T23:12:55.307504257Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.308356268Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.310374226Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-13T23:12:55.370875729Z level=info msg="app registry initialized" grafana | logger=installer.fs t=2025-06-13T23:12:55.385331215Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-13T23:12:55.402492412Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.402517363Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=295.114072ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.402542174Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-13T23:12:55.585634722Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-13T23:12:55.65932554Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-13T23:12:55.679960204Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.679983575Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=277.436051ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.680008847Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-13T23:12:55.866948979Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-13T23:12:55.963248687Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-13T23:12:56.08976819Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-13T23:12:56.113640999Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:56.113669881Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=433.655654ms grafana | logger=infra.usagestats t=2025-06-13T23:14:46.667471173Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
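
Note: at this point grafana is fully up (HTTP server on [::]:3000, app plugins installed, dashboards and datasources provisioned) and the kafka container begins its preflight checks. A readiness probe for a test suite could poll Grafana's documented /api/health endpoint, which reports the database status; a minimal sketch, assuming the localhost:3000 port mapping seen in the log, with the retry loop and names purely illustrative:

```python
import json
import time
import urllib.request

def wait_for_grafana(url: str = "http://localhost:3000/api/health",
                     attempts: int = 30, delay_s: float = 2.0) -> dict:
    """Poll /api/health until Grafana reports database "ok", or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                body = json.load(resp)
                if body.get("database") == "ok":
                    return body
        except OSError:
            pass  # server not accepting connections yet; retry
        time.sleep(delay_s)
    raise RuntimeError(f"grafana not healthy after {attempts} attempts")
```
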
kafka | [2025-06-13 23:12:48,107] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,108] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,112] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,115] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 23:12:48,120] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 23:12:48,127] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:48,155] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:48,156] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:48,166] INFO Socket connection established, initiating session, client: /172.17.0.7:49484, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:48,199] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002663f0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:48,316] INFO Session: 0x1000002663f0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:48,316] INFO EventThread shut down for session: 0x1000002663f0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
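
Note: the preflight above opens a ZooKeeper session (session id 0x1000002663f0000, negotiated timeout 40000 ms), confirms the ensemble answers, and closes the session again; the image does this in Java via io.confluent.admin.utils.ZookeeperConnectionWatcher. A rough Python equivalent using the kazoo client, with the host string and session timeout taken from the log and everything else an illustrative assumption:

```python
from kazoo.client import KazooClient

def zookeeper_is_healthy(hosts: str = "zookeeper:2181", timeout_s: float = 40.0) -> bool:
    """Open a session, read the root znode, and close cleanly, mirroring the preflight."""
    zk = KazooClient(hosts=hosts, timeout=timeout_s)
    try:
        zk.start(timeout=timeout_s)  # blocks until session establishment completes
        zk.get_children("/")         # any successful read proves the ensemble is serving
        return True
    except Exception:
        return False
    finally:
        zk.stop()
        zk.close()
```
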
kafka | [2025-06-13 23:12:49,022] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-13 23:12:49,336] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 23:12:49,436] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-13 23:12:49,437] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-13 23:12:49,438] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-13 23:12:49,451] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 23:12:49,455] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,455] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,455] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,455] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,458] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 23:12:49,462] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 23:12:49,468] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:49,469] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 23:12:49,478] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:49,486] INFO Socket connection established, initiating session, client: /172.17.0.7:49486, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:49,495] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002663f0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 23:12:49,500] INFO [ZooKeeperClient Kafka server] Connected. 
kafka | [2025-06-13 23:12:49,796] INFO Cluster ID = 47INnyWnS9aLXhUugDQvzQ (kafka.server.KafkaServer)
kafka | [2025-06-13 23:12:49,800] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-13 23:12:49,849] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name = 
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.4-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides = 
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 23:12:49,894] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 23:12:49,929] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-13 23:12:49,933] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-13 23:12:49,947] INFO Loaded 0 logs in 18ms. (kafka.log.LogManager)
kafka | [2025-06-13 23:12:49,947] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-13 23:12:49,949] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
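The KafkaConfig dump above is the effective broker configuration: a single ZooKeeper-mode broker (process.roles = [], zookeeper.connect = zookeeper:2181) with two PLAINTEXT listeners, one advertised inside the compose network (kafka:9092) and one mapped to the host (localhost:29092). The same values can be read back over the admin protocol; a sketch using kafka-python, which is not installed in this job's venv and is shown only as an illustration:

from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

# Query broker 1 (broker.id = 1 above) through the host-mapped listener.
admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
resource = ConfigResource(ConfigResourceType.BROKER, "1")
# Returns raw DescribeConfigs responses carrying the same key/value pairs.
print(admin.describe_configs([resource]))
admin.close()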
kafka | [2025-06-13 23:12:49,959] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-13 23:12:50,010] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-13 23:12:50,034] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-13 23:12:50,052] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 23:12:50,099] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 23:12:50,464] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 23:12:50,468] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 23:12:50,490] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-13 23:12:50,491] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 23:12:50,491] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 23:12:50,495] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-13 23:12:50,500] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 23:12:50,524] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,525] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,527] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,528] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,542] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-13 23:12:50,568] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 23:12:50,592] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749856370583,1749856370583,1,0,0,72057604343267329,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 23:12:50,593] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 23:12:50,653] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-13 23:12:50,662] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,675] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,676] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,688] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 23:12:50,690] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 23:12:50,700] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,701] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 23:12:50,706] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,717] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 23:12:50,734] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 23:12:50,740] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 23:12:50,740] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-13 23:12:50,756] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,756] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-13 23:12:50,762] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,765] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,767] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,785] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,788] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 23:12:50,793] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,799] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-13 23:12:50,817] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-13 23:12:50,819] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,826] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-13 23:12:50,827] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,829] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-13 23:12:50,833] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-13 23:12:50,842] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-13 23:12:50,843] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 23:12:50,844] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 23:12:50,848] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 23:12:50,850] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 23:12:50,850] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 23:12:50,851] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 23:12:50,853] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 23:12:50,854] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,862] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 23:12:50,863] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 23:12:50,863] INFO Kafka startTimeMs: 1749856370852 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 23:12:50,864] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-13 23:12:50,866] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-13 23:12:50,870] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,870] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,872] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,875] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,876] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,910] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:50,959] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 23:12:51,012] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 23:12:51,020] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
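At this point the broker is fully up: both acceptors are processing requests, broker 1 has won the controller election with epoch 1, and [KafkaServer id=1] started marks readiness. A CSIT-style probe could confirm reachability from the host side; a sketch with kafka-python (illustrative only, not part of this job's toolchain):

from kafka import KafkaConsumer

# Metadata round-trip against the PLAINTEXT_HOST listener advertised above.
consumer = KafkaConsumer(bootstrap_servers="localhost:29092")
print(consumer.topics())   # empty set right after startup, before any topic exists
consumer.close()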
kafka | [2025-06-13 23:12:55,928] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-13 23:12:55,928] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-13 23:13:19,898] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-13 23:13:19,913] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 23:13:19,918] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 23:13:19,920] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 23:13:19,984] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(XtISqPPyT6u0K2_KLzekQA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(g38L8b5dTFikXSBS1QHgiA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-13 23:13:19,986] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2025-06-13 23:13:19,989] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,991] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
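The AdminZkClient entries above show the two topics this run needs: policy-pdp-pap picks up the broker defaults (num.partitions = 1, default.replication.factor = 1), while __consumer_offsets is created with the compaction settings derived from the offsets.topic.* values. Since auto.create.topics.enable = true, the first client metadata request can trigger the same creation; the equivalent explicit call through the admin API, sketched with kafka-python (illustrative only):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
# Mirrors the assignment in the log: one partition, one replica on broker 1.
admin.create_topics([NewTopic(name="policy-pdp-pap",
                              num_partitions=1,
                              replication_factor=1)])
admin.close()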
kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,999] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:19,999] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
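Each TRACE line above walks one replica through the controller's state machine (NonExistentReplica to NewReplica); the matching partitions are then switched from NewPartition to OnlinePartition below, each with leader 1, leader epoch 0 and ISR [1]. A test that needs the policy-pdp-pap topic usable can simply wait for a partition to appear in metadata; a sketch with kafka-python (illustrative only):

import time
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:29092")
deadline = time.time() + 60.0
partitions = None
while time.time() < deadline:
    # Non-empty once the partition is online with an elected leader.
    partitions = consumer.partitions_for_topic("policy-pdp-pap")
    if partitions:
        break
    time.sleep(2.0)
consumer.close()
print(partitions)   # expected: {0}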
state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 23:13:20,016] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 23:13:20,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,177] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,182] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,182] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 23:13:20,185] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 23:13:20,185] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
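[Editor's note] Each become-leader request above carries a LeaderAndIsrPartitionState payload. For readers tracing the fields, a hypothetical Python mirror of the record as it appears in these lines (field names copied from the log; the class itself is an illustration, not Kafka's actual wire format):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LeaderAndIsrPartitionState:
        # Defaults are the values logged for __consumer_offsets-13 at
        # 23:13:20,185 above. Illustrative sketch only.
        topicName: str = "__consumer_offsets"
        partitionIndex: int = 13
        controllerEpoch: int = 1
        leader: int = 1
        leaderEpoch: int = 0
        isr: List[int] = field(default_factory=lambda: [1])
        partitionEpoch: int = 0
        replicas: List[int] = field(default_factory=lambda: [1])
        addingReplicas: List[int] = field(default_factory=list)
        removingReplicas: List[int] = field(default_factory=list)
        isNew: bool = True
        leaderRecoveryState: int = 0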
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 23:13:20,191] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-13 23:13:20,201] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
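[Editor's note] At 23:13:20,191 the controller batches all 51 partitions (50 __consumer_offsets plus policy-pdp-pap-0) into a single LeaderAndIsr request and then flips each replica to OnlineReplica. A stdlib-only sketch for tallying these transitions when reading a console log like this one (regex written against the line format above; illustrative, not part of the CSIT tooling):

    import re
    from collections import Counter

    # Matches the controller's replica state-change lines as they appear above.
    PATTERN = re.compile(
        r"Changed state of replica \d+ for partition (\S+) from (\w+) to (\w+)"
    )

    def tally_transitions(lines):
        """Count (from_state, to_state) pairs across a log excerpt."""
        counts = Counter()
        for line in lines:
            m = PATTERN.search(line)
            if m:
                counts[(m.group(2), m.group(3))] += 1
        return counts

    sample = ("kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] "
              "Changed state of replica 1 for partition policy-pdp-pap-0 "
              "from NewReplica to OnlineReplica (state.change.logger)")
    print(tally_transitions([sample]))  # Counter({('NewReplica', 'OnlineReplica'): 1})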
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 23:13:20,204] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 23:13:20,209] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
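[Editor's note] The broker side now acknowledges the same batch: correlationId 1, 51 partitions. A hypothetical consistency check comparing the partitions the controller sent against the ones the broker logged as received (stdlib-only and illustrative; not part of the build):

    import re

    SENT = re.compile(r"Sending become-leader LeaderAndIsr request .* for partition (\S+) ")
    RECEIVED = re.compile(
        r"Received LeaderAndIsr request LeaderAndIsrPartitionState\(topicName='([^']+)', partitionIndex=(\d+)"
    )

    def check_batch(lines):
        """Return partitions the controller sent that the broker never logged as received."""
        sent, received = set(), set()
        for line in lines:
            if (m := SENT.search(line)):
                sent.add(m.group(1))
            if (m := RECEIVED.search(line)):
                received.add(f"{m.group(1)}-{m.group(2)}")
        return sent - received  # an empty set means the batch arrived intact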
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
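[Editor's note] What follows is the broker acting on the batch: one become-leader transition per partition. As a final illustrative check (the expected count of 51 comes from the controller's own INFO line above: 50 __consumer_offsets partitions plus policy-pdp-pap-0), the transitions can be counted the same way:

    import re

    BECOME_LEADER = re.compile(
        r"starting the become-leader transition for partition (\S+) "
    )

    def count_become_leader(lines, expected=51):
        """Count distinct become-leader transitions and compare to the expected batch size."""
        parts = {m.group(1) for line in lines if (m := BECOME_LEADER.search(line))}
        return len(parts), len(parts) == expected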
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 23:13:20,256] TRACE [Broker 
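Note: each TRACE line above carries one LeaderAndIsrPartitionState: the controller tells broker 1 that it is leader (leader=1) for the partition at leaderEpoch=0, with replicas=[1] and an in-sync replica set of [1], as expected on a single-broker cluster. The same leader/replica/ISR view can be read back from cluster metadata; a minimal sketch with the confluent-kafka Python client (illustrative only; the bootstrap address is an assumption, not taken from this job):

    from confluent_kafka.admin import AdminClient

    # Assumed address; this CSIT job runs Kafka inside docker-compose.
    admin = AdminClient({"bootstrap.servers": "localhost:9092"})

    # list_topics() returns ClusterMetadata; .topics maps name -> TopicMetadata.
    md = admin.list_topics(timeout=10)
    for p in md.topics["__consumer_offsets"].partitions.values():
        # Mirrors the LeaderAndIsr state logged above: partition id,
        # leader broker id, replica set, in-sync replica set.
        print(p.id, p.leader, p.replicas, p.isrs)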
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 23:13:20,258] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-13 23:13:20,258] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
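Note: the 51 become-leader transitions cover the 50 partitions of __consumer_offsets plus policy-pdp-pap-0. The offsets topic is sharded so that each consumer group is handled by exactly one coordinator partition, chosen as abs(groupId.hashCode) % numPartitions using Java's String.hashCode. A small sketch of that mapping in Python (the group id below is illustrative, not taken from this job):

    def java_string_hash(s: str) -> int:
        # Java String.hashCode(): h = 31*h + ord(c) on a signed 32-bit int.
        h = 0
        for c in s:
            h = (31 * h + ord(c)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
        # Kafka computes Utils.abs(hash) % count, where Utils.abs is n & 0x7fffffff.
        return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions

    # e.g. which __consumer_offsets partition would own a group named "policy-pap":
    print(coordinator_partition("policy-pap"))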
kafka | [2025-06-13 23:13:20,337] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,353] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,362] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,363] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,365] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,388] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,389] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,389] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,389] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,391] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,399] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,399] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,399] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,399] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,399] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,410] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,411] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,411] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,411] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,412] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,420] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,421] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,421] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,421] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,421] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,433] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,434] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,434] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,434] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,434] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,443] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,445] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,445] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,445] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,446] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,458] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,460] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,460] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,460] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,461] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,471] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,473] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,473] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,473] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,473] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,489] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,489] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,490] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,493] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,493] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,500] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,501] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,501] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,501] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,502] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,511] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,515] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,515] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,515] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,515] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,524] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,525] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,525] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,525] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,525] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,534] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,535] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,535] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,535] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,535] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,547] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,548] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,548] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,548] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,548] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,559] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,560] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,560] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,560] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,560] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,567] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,567] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,567] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,567] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,567] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,579] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,580] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,580] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,580] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,581] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,592] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,593] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,593] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,593] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,593] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,606] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,609] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,609] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,609] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,609] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,618] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,619] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,619] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,619] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,619] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,638] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,640] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,640] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,640] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,640] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,647] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,648] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,648] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,649] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,649] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,655] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,656] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,656] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,657] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,657] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,664] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,665] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,665] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,665] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,665] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,672] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,673] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,673] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,673] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,673] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,680] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,681] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,681] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,681] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,681] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,688] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,689] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,689] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,689] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,689] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,696] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,696] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,696] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,697] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,697] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,705] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,706] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,706] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,706] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,706] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,716] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,718] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,718] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,718] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,719] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,726] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,726] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,727] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,727] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,727] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,736] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,737] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,737] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,737] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,738] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,751] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,752] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,752] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,752] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,752] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,761] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,762] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,762] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,762] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,762] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(XtISqPPyT6u0K2_KLzekQA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,772] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,773] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,773] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,773] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,773] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,779] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,779] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,779] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,779] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,779] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,793] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,794] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,794] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,794] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,794] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,803] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,804] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,804] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,804] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,804] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,815] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,816] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,817] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,817] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,817] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,827] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,827] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,827] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,828] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,828] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,840] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,841] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,841] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,842] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,842] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,851] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,852] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,852] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,852] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,852] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,863] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,864] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,864] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,864] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,864] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 23:13:20,876] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 23:13:20,876] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 23:13:20,877] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,877] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 23:13:20,877] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-13 23:13:20,888] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,889] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,891] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,891] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,891] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 23:13:20,900] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,901] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,901] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,901] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,901] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 23:13:20,907] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,908] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,908] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,908] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,908] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 23:13:20,916] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,917] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,917] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,917] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,917] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 23:13:20,923] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,924] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,924] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,924] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,924] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 23:13:20,937] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 23:13:20,938] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 23:13:20,938] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,938] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 23:13:20,938] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
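The creation entries above show every __consumer_offsets partition materialised with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600, while policy-pdp-pap-0 is created with empty properties, i.e. broker defaults. A minimal Java AdminClient sketch for reading those effective topic configs back after startup, assuming only the bootstrap address kafka:9092 that appears later in this log:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class OffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config cfg = admin.describeConfigs(Collections.singleton(topic))
                    .all().get().get(topic);
            // These should echo the values logged at creation time above.
            System.out.println("cleanup.policy   = " + cfg.get("cleanup.policy").value());
            System.out.println("compression.type = " + cfg.get("compression.type").value());
            System.out.println("segment.bytes    = " + cfg.get("segment.bytes").value());
        }
    }
}

The same call against policy-pdp-pap should report broker-level defaults for every key, since that topic carries no overrides of its own.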
(state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 23:13:20,951] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,953] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,961] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,961] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,961] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
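Broker 1 leads all 50 __consumer_offsets partitions (the LeaderAndIsr request above covered 51 partitions: those 50 plus policy-pdp-pap-0), so the GroupCoordinator election entries that follow name it coordinator for each of them. Which offsets partition, and therefore which broker, coordinates a particular consumer group is determined by hashing the group id modulo the partition count; a minimal sketch of that mapping, using "policy-pap" purely as an illustrative group id:

public class CoordinatorPartition {
    // Mirrors Kafka's abs(groupId.hashCode) % offsets.topic.num.partitions
    // rule (50 partitions in this deployment).
    static int coordinatorPartition(String groupId, int offsetsPartitions) {
        // Mask the sign bit so an Integer.MIN_VALUE hash code stays non-negative.
        return (groupId.hashCode() & 0x7fffffff) % offsetsPartitions;
    }

    public static void main(String[] args) {
        System.out.println(coordinatorPartition("policy-pap", 50));
    }
}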
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,970] INFO [Broker id=1] Finished LeaderAndIsr request in 762ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-13 23:13:20,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 4 
milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
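With offsets and group metadata loaded for every partition it owns, broker 1 can start serving group membership and offset commits. A client-side way to confirm which broker ended up coordinating a given group, again assuming kafka:9092 and a hypothetical group id "policy-pap":

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class WhoCoordinates {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription desc = admin
                    .describeConsumerGroups(Collections.singleton("policy-pap"))
                    .describedGroups().get("policy-pap").get();
            // coordinator() names the broker leading the group's
            // __consumer_offsets partition (broker 1 in this run).
            System.out.println("coordinator: " + desc.coordinator());
        }
    }
}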
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 23:13:20,979] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=g38L8b5dTFikXSBS1QHgiA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=XtISqPPyT6u0K2_KLzekQA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | 
[2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 23:13:20,988] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) kafka | [2025-06-13 23:13:20,988] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 23:13:21,640] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:21,655] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:21,757] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c in Empty state. Created a new member id consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:21,761] INFO [GroupCoordinator 1]: Preparing to rebalance group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c with group instance id None; client reason: need to re-join with the given member-id: consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:21,784] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a84e6b67-b24e-431d-be69-da7e7df84a86 in Empty state. Created a new member id consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:21,789] INFO [GroupCoordinator 1]: Preparing to rebalance group a84e6b67-b24e-431d-be69-da7e7df84a86 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 with group instance id None; client reason: need to re-join with the given member-id: consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,668] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,692] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,761] INFO [GroupCoordinator 1]: Stabilized group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,774] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c for group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,791] INFO [GroupCoordinator 1]: Stabilized group a84e6b67-b24e-431d-be69-da7e7df84a86 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 23:13:24,796] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 for group a84e6b67-b24e-431d-be69-da7e7df84a86 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.7:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2025-06-13T23:13:20.744+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2025-06-13T23:13:20.926+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | 
internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null 
policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-13T23:13:20.979+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401145 policy-apex-pdp | [2025-06-13T23:13:21.150+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-1, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-13T23:13:21.171+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-13T23:13:21.171+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2025-06-13T23:13:21.172+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2025-06-13T23:13:21.192+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c policy-apex-pdp | group.instance.id = null policy-apex-pdp | group.protocol = classic policy-apex-pdp | group.remote.assignor = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | 
isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS 
policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-13T23:13:21.192+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401206 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-13T23:13:21.208+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44175417-ba30-47bc-8bc5-2a04b742873a, alive=false, publisher=null]]: starting policy-apex-pdp | [2025-06-13T23:13:21.221+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.gzip.level = -1 policy-apex-pdp | compression.lz4.level = 9 policy-apex-pdp | compression.type = none policy-apex-pdp | compression.zstd.level = 3 policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | enable.metrics.push = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metadata.recovery.strategy = none policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.max.ms = 1000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | 
sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.header.urlencode = false policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2025-06-13T23:13:21.222+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-apex-pdp | [2025-06-13T23:13:21.234+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
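The ProducerConfig dump above ends with the client reporting that an idempotent producer was instantiated, which follows directly from enable.idempotence = true together with acks = -1 and retries = 2147483647. As a minimal sketch only — the PDP itself wraps the client in ONAP's InlineKafkaTopicSink/KafkaPublisherWrapper, and the class name, topic, and payload below are illustrative assumptions — an equivalent plain Java client producer would be configured like this:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpSinkSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values copied from the ProducerConfig dump in the log above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "-1");                  // wait for all in-sync replicas
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // hence "Instantiated an idempotent producer"
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // 2147483647 in the dump
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; the real sink publishes PDP_STATUS JSON (see below).
                producer.send(new ProducerRecord<>("policy-pdp-pap", "hello"));
            }
        }
    }

With these settings the broker deduplicates retried batches by producer id and sequence number, which is consistent with the later broker-side record "ProducerId set to 2 with epoch 0".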
policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401255
policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44175417-ba30-47bc-8bc5-2a04b742873a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-apex-pdp | [2025-06-13T23:13:21.276+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-apex-pdp | [2025-06-13T23:13:21.277+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-apex-pdp | [2025-06-13T23:13:21.294+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-apex-pdp | []
policy-apex-pdp | [2025-06-13T23:13:21.297+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4ff4360d-c2f6-4efb-9190-1bac5d7ac675","timestampMs":1749856401281,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2025-06-13T23:13:21.630+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-apex-pdp | [2025-06-13T23:13:21.726+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-apex-pdp | [2025-06-13T23:13:21.726+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-apex-pdp | [2025-06-13T23:13:21.727+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2025-06-13T23:13:21.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2025-06-13T23:13:21.738+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] (Re-)joining group policy-apex-pdp | [2025-06-13T23:13:21.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Request joining group due to: need to re-join with the given member-id: consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c policy-apex-pdp | [2025-06-13T23:13:21.759+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] (Re-)joining group policy-apex-pdp | [2025-06-13T23:13:22.183+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2025-06-13T23:13:22.184+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | 
[2025-06-13T23:13:24.763+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c', protocol='range'} policy-apex-pdp | [2025-06-13T23:13:24.771+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Finished assignment for group at generation 1: {consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2025-06-13T23:13:24.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c', protocol='range'} policy-apex-pdp | [2025-06-13T23:13:24.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2025-06-13T23:13:24.780+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2025-06-13T23:13:24.793+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2025-06-13T23:13:24.814+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
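The consumer lines above walk through a standard Kafka group handshake: discover the group coordinator at kafka:9092, (re-)join the group, receive partition policy-pdp-pap-0 under the range assignor, and reset the fetch position because no committed offset exists yet. A hedged sketch of the same handshake using kafka-python (the service itself uses the Java client, and its group id is a generated UUID; the one below is illustrative):

```python
# Sketch of the consumer-group join/assign/fetch cycle logged above, using kafka-python.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",                # topic whose partition policy-pdp-pap-0 is assigned above
    bootstrap_servers="kafka:9092",  # group coordinator discovered at kafka:9092 in the log
    group_id="pdp-pap-reader",       # illustrative; apex-pdp uses a generated UUID group id
    auto_offset_reset="earliest",    # mirrors the "Resetting offset ..." line above
    consumer_timeout_ms=10_000,      # stop iterating when no records arrive for 10 s
)

for record in consumer:
    print(record.partition, record.offset, record.value.decode("utf-8"))
```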
policy-apex-pdp | [2025-06-13T23:13:41.280+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T23:13:41.308+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T23:13:41.311+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:13:41.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.464+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2025-06-13T23:13:41.465+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T23:13:41.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.484+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-13T23:13:41.485+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:13:41.491+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.491+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:13:41.532+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.534+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.542+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.542+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:13:41.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.569+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:13:41.578+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:13:49.214+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.1 - - [13/Jun/2025:23:13:49 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-apex-pdp | [2025-06-13T23:13:56.127+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.3 - policyadmin [13/Jun/2025:23:13:56 +0000] "GET /metrics HTTP/1.1" 200 2052 "" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-13T23:14:09.253+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [13/Jun/2025:23:14:09 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0" policy-apex-pdp | [2025-06-13T23:14:56.076+00:00|INFO|RequestLog|qtp1089680530-28] 172.17.0.3 - policyadmin [13/Jun/2025:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 2065 "" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-13T23:15:41.465+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:15:41.478+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | 
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-13T23:15:41.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-13T23:15:56.079+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.3 - policyadmin [13/Jun/2025:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 2075 "" "Prometheus/3.4.1" policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-13T23:12:58.284+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-13T23:12:58.359+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 30 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-13T23:12:58.360+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-13T23:12:59.776+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-13T23:12:59.933+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 146 ms. Found 6 JPA repository interfaces. policy-api | [2025-06-13T23:13:00.567+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-13T23:13:00.580+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T23:13:00.581+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-13T23:13:00.581+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-13T23:13:00.624+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-13T23:13:00.624+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2207 ms policy-api | [2025-06-13T23:13:00.913+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-13T23:13:00.990+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-13T23:13:01.035+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-13T23:13:01.396+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-13T23:13:01.437+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-13T23:13:01.630+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1ab21633 policy-api | [2025-06-13T23:13:01.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-api | [2025-06-13T23:13:01.705+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-13T23:13:03.689+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-13T23:13:03.693+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-13T23:13:04.353+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-13T23:13:05.305+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-13T23:13:06.394+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-13T23:13:06.438+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-13T23:13:07.106+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-13T23:13:07.234+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T23:13:07.266+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-13T23:13:07.290+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.678 seconds (process running for 10.24) policy-api | [2025-06-13T23:13:39.924+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-13T23:13:39.925+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-13T23:13:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms policy-api | [2025-06-13T23:14:54.692+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers: policy-api | [] policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | 
-v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.5) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.5) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade
0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE 
policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
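Each "> upgrade NNNN-name.sql" block above follows the same shape: run the numbered script against the policyadmin database, echo the psql result (CREATE TABLE, ALTER TABLE, INSERT 0 1, ...), then report rc=0 and record the step. An illustrative Python reimplementation of that loop (the real policy-db-migrator is a shell script, and the script directory below is a hypothetical placeholder):

```python
# Illustrative sketch only: apply numbered upgrade scripts in order, echoing the
# "> upgrade ... rc=0" pattern seen above. Not the actual policy-db-migrator.
import pathlib
import subprocess

SCRIPT_DIR = pathlib.Path("/opt/db/migration/sql")   # hypothetical location

for script in sorted(SCRIPT_DIR.glob("*.sql")):      # 0100-..., 0110-..., lexicographic order
    print(f"> upgrade {script.name}")
    rc = subprocess.run(
        ["psql", "-U", "policy_user", "-d", "policyadmin", "-f", str(script)],
    ).returncode
    print(f"rc={rc}")
    if rc != 0:
        break    # the real migrator likewise records per-script success before moving on
```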
policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | 
rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.629799
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.678953
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.730923
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.780219
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.830839
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.882283
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.934401
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.986604
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.048658
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.102621
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.157818
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.206595
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.259683
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.313062
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.357615
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.404823
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.456408
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.500029
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.55267
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.603933
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.654374
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.707003
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.755557
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.80579
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.856509
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.90995
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.96676
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.015756
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.06485
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.12301
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.169482
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.21659
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.261942
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.305173
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.357031
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.408054
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.464093
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.516046
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.575641
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.631804
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.686071
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.746564
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.795025
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.845206
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.900518
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.961317
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.007093
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.059166
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.114066
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.16434
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.222838
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.28342
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.330706
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.391105
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.446722
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade |
0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.502519 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.556394 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.606871 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.656448 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.708768 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.768925 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.821729 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.874511 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.933175 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.992591 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.05308 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.108557 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.171031 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.220815 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.266398 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.317989 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.363864 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.411905 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.467377 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.518551 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.566848 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.616006 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.662873 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.719958 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.76503 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.819396 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 
0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.872888 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.924111 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.985431 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.034139 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.085729 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.138293 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.190876 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.241292 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.286247 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.336116 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.388584 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.437127 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.491259 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.544381 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.602898 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.660749 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.712985 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.768339 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.821405 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.881459 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.92843 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.968838 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.020008 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.070606 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.129699 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 
policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.240004
policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.292709
policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.352666
policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.409495
policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.45721
policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.51233
policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.563154
policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.622945
policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.680398
policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.736595
policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.790428
policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306252312451100u | 1 | 2025-06-13 23:12:51.837487
policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:51.891064
policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:51.946559
policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:52.006515
policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:52.063242
policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.117532
policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.167397
policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.218837
policy-db-migrator | (126 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: OK @ 1300
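The changelog above is the migrator's audit trail: each executed script becomes one row with success = 1, and schema_versions records the final version (here 1300). A minimal post-run check, assuming the compose network exposes postgres on localhost:5432, that the changelog lives in the migration database, and that the password matches the policy_user owner shown in the listings (all three are assumptions, not values printed in this log):

```python
# Verify that no policyadmin migration script failed; connection details
# below are assumptions for a local compose run, not taken from this log.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, dbname="migration",
                        user="policy_user", password="policy_user")
with conn, conn.cursor() as cur:
    # On a clean run every row carries success = 1.
    cur.execute("SELECT count(*) FROM policyadmin_schema_changelog "
                "WHERE success <> 1")
    failed = cur.fetchone()[0]
    assert failed == 0, f"{failed} migration script(s) failed"
    # The log reports "policyadmin: OK @ 1300"; confirm the recorded version.
    cur.execute("SELECT version FROM schema_versions WHERE name = 'policyadmin'")
    print("policyadmin schema version:", cur.fetchone()[0])
```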
policy-db-migrator | Initializing clampacm...
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator | List of databases (9 rows; identical to the listing above)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | clampacm: upgrade available: 0 -> 1701
policy-db-migrator | List of databases (9 rows; identical to the listing above)
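The "Preparing upgrade release version" lines show the migrator staging script batches per target release (1400 through 1701) before running anything. A sketch of the ordering this implies, where scripts run batch by batch and in NNNN- filename order within a batch; the directory layout here is an assumption, not something the log prints:

```python
# Illustrative staging of per-release SQL batches; the sql/clampacm layout
# is hypothetical. Only the ordering (release, then NNNN- prefix) is
# grounded in the log output.
from pathlib import Path

def staged_scripts(root: str = "sql/clampacm"):
    for version_dir in sorted(Path(root).iterdir()):        # 1400, 1500, ...
        for script in sorted(version_dir.glob("*-*.sql")):  # 0100-..., 0200-...
            yield version_dir.name, script.name

for version, script in staged_scripts():
    print(f"{version}: {script}")
```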
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1701
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-participantreplica.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-participant.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-message.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-messagejob.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-participantreplica.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | clampacm: OK: upgrade (1701)
policy-db-migrator | List of databases (9 rows; identical to the listing above)
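Every block above follows the same contract: run one script, echo rc=0 on success, and append a changelog row (the trailing INSERT 0 1). The real db-migrator is a shell script; this Python rendering of the loop, including the psql invocation and the simplified changelog columns, is an illustrative assumption rather than the actual implementation:

```python
# Hypothetical per-script migration step mirroring the "> upgrade ... rc=0"
# pattern in the log. Hosts, credentials, and the changelog column list are
# assumptions; ON_ERROR_STOP makes psql return nonzero on the first error.
import subprocess

def run_script(script: str, tag: str) -> int:
    rc = subprocess.run(
        ["psql", "-h", "localhost", "-U", "policy_user", "-d", "clampacm",
         "-v", "ON_ERROR_STOP=1", "-f", script],
        capture_output=True, text=True,
    ).returncode
    print(f"> upgrade {script}\nrc={rc}")
    record = ("INSERT INTO clampacm_schema_changelog "
              "(script, operation, tag, success) "
              f"VALUES ('{script}', 'upgrade', '{tag}', {1 if rc == 0 else 0})")
    subprocess.run(["psql", "-h", "localhost", "-U", "policy_user",
                    "-d", "migration", "-c", record], check=False)
    return rc
```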
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 1701
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:52.906737
policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:52.967887
policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.027052
policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.089636
policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.14734
policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.203471
policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.258417
policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.31186
policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.370185
policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.425815
policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.479792
policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.530805
policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.584049
policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.636411
policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.694231
policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.753536
policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.804019
policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.858749
policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.913061
policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.966506
policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:54.023425
policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306252312521600u | 1 | 2025-06-13 23:12:54.070627
policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306252312521600u | 1 | 2025-06-13 23:12:54.123484
policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306252312521601u | 1 | 2025-06-13 23:12:54.176313
policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306252312521601u | 1 | 2025-06-13 23:12:54.223251
policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.281249
policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.332158
policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.388734
policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.447169
policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.503629
policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.556148
policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.615559
policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.671059
policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.723289
policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.780685
policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.834724
policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.878368
policy-db-migrator | (37 rows)
policy-db-migrator |
policy-db-migrator | clampacm: OK @ 1701
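The repeated 'NOTICE: relation "..." already exists, skipping' lines throughout this run are the signature of IF NOT EXISTS bookkeeping DDL: the migrator recreates its version and changelog tables on every invocation and lets postgres skip them when present, which keeps reruns safe. A minimal sketch of the same pattern; the column types are assumptions, only the table name and the skipping behavior come from the log:

```python
# Idempotent bookkeeping DDL; rerunning this emits the NOTICE seen in the
# log instead of failing. Connection details and column types are assumed.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS schema_versions (
    name    varchar(60) NOT NULL,
    version varchar(20) NOT NULL
)
"""
with psycopg2.connect(host="localhost", dbname="migration",
                      user="policy_user", password="policy_user") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)  # NOTICE: relation "schema_versions" already exists
```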
policy-db-migrator | Initializing pooling...
policy-db-migrator | 4 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases (9 rows; identical to the listing above)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | pooling: upgrade available: 0 -> 1600
policy-db-migrator | List of databases (9 rows; identical to the listing above)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-distributed.locking.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | pooling: OK: upgrade (1600)
policy-db-migrator | List of databases (9 rows; identical to the listing above)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
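The pooling schema is a single script: 0100-distributed.locking.sql creates one lock table plus two indexes that back distributed locking across PDP instances. The log shows only the DDL statement kinds, not the schema, so the table and column names in this claim-a-lock sketch are assumptions (including the unique index on resourceId that ON CONFLICT requires):

```python
# Hypothetical use of the pooling lock table; every identifier below is an
# assumption about what 0100-distributed.locking.sql creates.
import psycopg2

CLAIM = """
INSERT INTO locks (resourceId, host, owner, expirationTime)
VALUES (%s, %s, %s, now() + interval '15 seconds')
ON CONFLICT (resourceId) DO UPDATE
    SET host = EXCLUDED.host, owner = EXCLUDED.owner,
        expirationTime = EXCLUDED.expirationTime
    WHERE locks.expirationTime < now()   -- only steal expired locks
"""
with psycopg2.connect(host="localhost", dbname="pooling",
                      user="policy_user", password="policy_user") as conn:
    with conn.cursor() as cur:
        cur.execute(CLAIM, ("my-resource", "host-1", "owner-1"))
        print("lock claimed" if cur.rowcount == 1 else "lock held elsewhere")
```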
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------
policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306252312551600u | 1 | 2025-06-13 23:12:55.55567
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | pooling: OK @ 1600
policy-db-migrator | Initializing operationshistory...
policy-db-migrator | 6 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases (9 rows; identical to the listing above)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
policy-db-migrator | List of databases (9 rows; identical to the listing above)
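Each "upgrade available: X -> Y" line is the same decision repeated per schema: compare the version recorded in schema_versions with the highest staged release and upgrade only if the target is newer. The values below mirror the log; the function itself is an illustrative reconstruction of that check, not the migrator's actual code:

```python
# Sketch of the "upgrade available" decision; inputs mirror the log values.
def upgrade_available(current: int, staged: list[int]) -> tuple[int, int] | None:
    target = max(staged)
    return (current, target) if target > current else None

print(upgrade_available(0, [1600]))     # (0, 1600) -> "upgrade available: 0 -> 1600"
print(upgrade_available(1600, [1600]))  # None -> schema already current
```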
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-operationshistory.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | operationshistory: OK: upgrade (1600)
policy-db-migrator | List of databases (9 rows; identical to the listing above)
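With operationshistory done, all four schemas migrated in this run have printed their terminal markers. In CSIT it is convenient to assert those markers straight from the migrator container's output; the expected versions below come from this log, while the container name is an assumption about this compose setup:

```python
# Assert every schema reached its "OK @ <version>" marker in the migrator
# logs; "policy-db-migrator" as the container name is an assumption.
import subprocess

EXPECTED = {"policyadmin": "1300", "clampacm": "1701",
            "pooling": "1600", "operationshistory": "1600"}

logs = subprocess.run(["docker", "logs", "policy-db-migrator"],
                      capture_output=True, text=True).stdout
for schema, version in EXPECTED.items():
    marker = f"{schema}: OK @ {version}"
    assert marker in logs, f"missing marker: {marker}"
print("all schemas migrated")
```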
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306252312561600u | 1 | 2025-06-13 23:12:56.235619
policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306252312561600u | 1 | 2025-06-13 23:12:56.302102
policy-db-migrator | (2 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.7:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |
policy-pap | :: Spring Boot ::                (v3.4.6)
policy-pap |
policy-pap | [2025-06-13T23:13:09.890+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 51 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-13T23:13:09.892+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-13T23:13:11.392+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-13T23:13:11.487+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 7 JPA repository interfaces.
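Before the Spring banner, pap's entrypoint gates on its dependencies with the "Waiting for <service> port <n>..." loop until api:6969 and kafka:9092 accept TCP connections. The same gate in a few lines of Python; the endpoints come from the log, while the 120 second budget and retry interval are assumptions:

```python
# TCP readiness gate equivalent to the "Waiting for ... port ..." lines.
import socket
import time

def wait_for(host: str, port: int, budget: float = 120.0) -> None:
    deadline = time.monotonic() + budget
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host} ({port}) open")
                return
        except OSError:
            time.sleep(1)   # dependency not up yet; retry until the deadline
    raise TimeoutError(f"{host}:{port} not reachable")

wait_for("api", 6969)
wait_for("kafka", 9092)
```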
policy-pap | [2025-06-13T23:13:12.481+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-13T23:13:12.495+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-13T23:13:12.497+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-13T23:13:12.497+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-13T23:13:12.577+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-13T23:13:12.577+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2624 ms
policy-pap | [2025-06-13T23:13:13.072+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-13T23:13:13.155+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-13T23:13:13.205+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-13T23:13:13.635+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-13T23:13:13.686+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-13T23:13:13.916+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd
policy-pap | [2025-06-13T23:13:13.920+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2025-06-13T23:13:14.029+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap | Database driver: undefined/unknown
policy-pap | Database version: 16.4
policy-pap | Autocommit mode: undefined/unknown
policy-pap | Isolation level: undefined/unknown
policy-pap | Minimum pool size: undefined/unknown
policy-pap | Maximum pool size: undefined/unknown
policy-pap | [2025-06-13T23:13:16.047+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-13T23:13:16.052+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-13T23:13:17.350+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-1
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = a84e6b67-b24e-431d-be69-da7e7df84a86
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
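The dump above is the effective ConsumerConfig of pap's first Kafka consumer: plaintext against kafka:9092, a generated group id, latest offsets, auto-commit on. For experimenting against the same broker, the handful of values that matter can be mirrored with kafka-python; note this library is my substitution for the Java client that policy-pap actually uses:

```python
# kafka-python rendering of the salient settings from the dump; a sketch
# for local testing, not how policy-pap itself constructs its consumer.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",                    # topic subscribed to further below
    bootstrap_servers=["kafka:9092"],
    group_id="a84e6b67-b24e-431d-be69-da7e7df84a86",
    auto_offset_reset="latest",
    enable_auto_commit=True,
    session_timeout_ms=45000,
    heartbeat_interval_ms=3000,
    security_protocol="PLAINTEXT",
    value_deserializer=lambda b: b.decode("utf-8"),
)
for record in consumer:
    print(record.topic, record.value)
```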
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T23:13:17.404+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856397537 policy-pap | [2025-06-13T23:13:17.542+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-1, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T23:13:17.543+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T23:13:17.544+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856397552 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T23:13:17.884+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-13T23:13:18.012+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-13T23:13:18.089+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-13T23:13:18.304+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-13T23:13:19.133+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-13T23:13:19.265+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T23:13:19.292+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-13T23:13:19.316+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-13T23:13:19.316+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-13T23:13:19.318+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-13T23:13:19.318+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-13T23:13:19.320+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@76ec6ae0 policy-pap | [2025-06-13T23:13:19.332+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T23:13:19.333+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = a84e6b67-b24e-431d-be69-da7e7df84a86 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T23:13:19.333+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:19.341+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399341 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T23:13:19.343+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-13T23:13:19.343+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@48a5ef5c policy-pap | [2025-06-13T23:13:19.343+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T23:13:19.344+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T23:13:19.344+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399350 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d5491ea2-aac8-4b21-abb9-b5cce523dbc1, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T23:13:19.366+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T23:13:19.367+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:19.386+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399403 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d5491ea2-aac8-4b21-abb9-b5cce523dbc1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c40c9331-4a56-4369-9ae1-ba78201c3bfa, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T23:13:19.405+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T23:13:19.405+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T23:13:19.406+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
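NOTE: both topic sinks above instantiate idempotent producers. With enable.idempotence = true the Kafka client pins acks = -1 and retries = 2147483647, exactly as printed in the two ProducerConfig dumps. A minimal standalone sketch of the same configuration with the plain Kafka Java client follows; it is not the PAP code itself, and the sample payload is illustrative only.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapSinkSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // bootstrap.servers = [kafka:9092]
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);         // enable.idempotence = true implies acks=-1, retries=MAX_INT
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // PAP publishes PDP_UPDATE / PDP_STATE_CHANGE JSON to this topic; payload here is a stand-in.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
        }
    }
}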
policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399410 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c40c9331-4a56-4369-9ae1-ba78201c3bfa, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-13T23:13:19.413+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-13T23:13:19.413+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-13T23:13:19.414+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-13T23:13:19.416+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-13T23:13:19.417+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-13T23:13:19.417+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.407 seconds (process running for 10.965) policy-pap | [2025-06-13T23:13:19.870+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-pap | [2025-06-13T23:13:19.870+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-pap | [2025-06-13T23:13:19.874+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T23:13:19.874+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-pap | [2025-06-13T23:13:19.925+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-13T23:13:19.925+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-13T23:13:19.969+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
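NOTE: the UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are transient. The policy-pdp-pap topic does not exist on first contact; because the consumers run with allow.auto.create.topics = true (see the ConsumerConfig dumps), the broker auto-creates it and elects a leader, after which the warnings stop. A hedged sketch of pre-creating the topic with the Kafka AdminClient, which would avoid the warnings entirely; the partition and replication counts are assumptions for illustration, not values from this job's configuration.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePdpPapTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // 1 partition matches the single policy-pdp-pap-0 partition assigned later in the log;
            // replication factor 1 is an assumption for a single-broker test cluster.
            admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
        }
    }
}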
[2025-06-13T23:13:19.970+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 47INnyWnS9aLXhUugDQvzQ policy-pap | [2025-06-13T23:13:20.106+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T23:13:20.112+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T23:13:20.354+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T23:13:20.385+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T23:13:20.818+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T23:13:20.872+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T23:13:21.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T23:13:21.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T23:13:21.646+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 policy-pap | [2025-06-13T23:13:21.646+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T23:13:21.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T23:13:21.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] (Re-)joining group policy-pap | [2025-06-13T23:13:21.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Request joining group due to: need to re-join with the given member-id: consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 policy-pap | [2025-06-13T23:13:21.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] (Re-)joining group policy-pap | [2025-06-13T23:13:24.672+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6', protocol='range'} policy-pap | [2025-06-13T23:13:24.681+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T23:13:24.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6', protocol='range'} policy-pap | [2025-06-13T23:13:24.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T23:13:24.708+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T23:13:24.722+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T23:13:24.735+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
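NOTE: this is the classic group protocol completing for group policy-pap: coordinator discovery, JoinGroup at generation 1, SyncGroup, then assignment of policy-pdp-pap-0. Because no offset was ever committed and auto.offset.reset = latest, the consumer seeks to the current end of the partition (offset 1). A minimal consumer sketch reproducing the settings from the ConsumerConfig dumps above; it is illustrative, not the PAP wrapper code.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HeartbeatSourceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");       // group.id = policy-pap
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");  // no committed offset -> reset to log end
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));  // effectiveTopic for policy-heartbeat as well
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(15))) {
                System.out.println(rec.value());            // PDP_STATUS heartbeat JSON
            }
        }
    }
}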
policy-pap | [2025-06-13T23:13:24.792+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5', protocol='range'} policy-pap | [2025-06-13T23:13:24.793+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Finished assignment for group at generation 1: {consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T23:13:24.808+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5', protocol='range'} policy-pap | [2025-06-13T23:13:24.809+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T23:13:24.809+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T23:13:24.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T23:13:24.813+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
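NOTE: the second consumer joins under its own group id (a84e6b67-...), so both groups independently receive every record on policy-pdp-pap; the policy-heartbeat source reads the same physical topic (effectiveTopic=policy-pdp-pap), which is why each message appears twice on the dispatch paths below. The per-generation steps logged here ("Adding newly assigned partitions", "Found no committed offset") surface through a ConsumerRebalanceListener; the listener class below is hypothetical, shown only to make those callbacks concrete.

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class LoggingRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Fires once per generation after the group sync completes,
        // e.g. with [policy-pdp-pap-0] in this run.
        System.out.println("Adding newly assigned partitions: " + partitions);
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Fires before a rebalance takes partitions away; not exercised in this run.
        System.out.println("Revoking partitions: " + partitions);
    }
}

It would be attached via consumer.subscribe(List.of("policy-pdp-pap"), new LoggingRebalanceListener()).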
policy-pap | [2025-06-13T23:13:41.318+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-13T23:13:41.319+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T23:13:41.319+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T23:13:41.329+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting listener policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting timer policy-pap | [2025-06-13T23:13:41.405+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] policy-pap | [2025-06-13T23:13:41.406+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting enqueue policy-pap | [2025-06-13T23:13:41.407+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate started policy-pap | [2025-06-13T23:13:41.406+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] policy-pap | [2025-06-13T23:13:41.412+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.453+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.454+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T23:13:41.455+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.455+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T23:13:41.484+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T23:13:41.486+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T23:13:41.486+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T23:13:41.491+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 174671f0-1735-4888-97b7-4da8a7d88fa3 policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping enqueue policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping timer policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 
PdpUpdate stopping listener policy-pap | [2025-06-13T23:13:41.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopped policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate successful policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 start publishing next request policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting listener policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting timer policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting enqueue policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.522+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange started policy-pap | [2025-06-13T23:13:41.534+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.534+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T23:13:41.542+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.543+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f02a73d5-f851-430e-8ac3-44980b8e59ce policy-pap | [2025-06-13T23:13:41.554+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.554+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping enqueue policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping timer policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping listener policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopped policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange successful policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 start publishing next request policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting listener policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting timer policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=e476a123-aac9-4977-b04d-80df5d07a19a, expireMs=1749856451556] 
policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting enqueue
policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate started
policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:13:41.566+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:13:41.566+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-13T23:13:41.568+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:13:41.568+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-13T23:13:41.579+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e476a123-aac9-4977-b04d-80df5d07a19a
policy-pap | [2025-06-13T23:13:41.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping enqueue
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping timer
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e476a123-aac9-4977-b04d-80df5d07a19a, expireMs=1749856451556]
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping listener
policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopped
policy-pap | [2025-06-13T23:13:41.585+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate successful
policy-pap | [2025-06-13T23:13:41.585+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 has no more requests
policy-pap | [2025-06-13T23:13:41.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | [2025-06-13T23:13:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-pap | [2025-06-13T23:13:41.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 2 ms
policy-pap | [2025-06-13T23:14:11.406+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405]
policy-pap | [2025-06-13T23:14:11.520+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520]
policy-pap | [2025-06-13T23:15:16.773+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-13T23:15:16.780+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-13T23:15:17.165+00:00|INFO|SessionData|http-nio-6969-exec-9] unknown group testGroup
policy-pap | [2025-06-13T23:15:17.783+00:00|INFO|SessionData|http-nio-6969-exec-9] create cached group testGroup
policy-pap | [2025-06-13T23:15:17.783+00:00|INFO|SessionData|http-nio-6969-exec-9] creating DB group testGroup
policy-pap | [2025-06-13T23:15:18.256+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
policy-pap | [2025-06-13T23:15:18.551+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-13T23:15:18.639+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-13T23:15:18.639+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup
policy-pap | [2025-06-13T23:15:18.640+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup
policy-pap | [2025-06-13T23:15:18.653+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T23:15:18Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T23:15:18Z, user=policyadmin)]
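The two "discarded (expired)" lines are the request timers from the earlier exchange firing after the fact: each Timer's expireMs is an epoch value in milliseconds set roughly 30 s after the request's timestampMs, which is why Thread-10 logged "waiting 29999ms" at registration. A quick check using only values from the log above:

from datetime import datetime, timezone

# Values copied from the TimerManager lines above.
update_timer_expire_ms = 1749856451405  # Timer [name=174671f0-...]
update_request_ts_ms = 1749856421389    # timestampMs of that PDP_UPDATE

# The timeout window is ~30000 ms.
print("window:", update_timer_expire_ms - update_request_ts_ms, "ms")

# expireMs is epoch milliseconds: 2025-06-13 23:14:11.405 UTC, which is
# the moment Thread-9 logs "update timer discarded (expired)".
print("expiry:", datetime.fromtimestamp(update_timer_expire_ms / 1000,
                                        tz=timezone.utc).isoformat())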
policy-pap | [2025-06-13T23:15:19.334+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup
policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
policy-pap | [2025-06-13T23:15:19.346+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T23:15:19Z, user=policyadmin)]
policy-pap | [2025-06-13T23:15:19.417+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group defaultGroup
policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup
policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup
policy-pap | [2025-06-13T23:15:19.735+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup
policy-pap | [2025-06-13T23:15:19.743+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T23:15:19Z, user=policyadmin)]
policy-pap | [2025-06-13T23:15:20.271+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
policy-pap | [2025-06-13T23:15:20.273+00:00|INFO|SessionData|http-nio-6969-exec-3] deleting DB group testGroup
policy-pap | [2025-06-13T23:15:41.477+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-13T23:15:41.478+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-13T23:15:41.479+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
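This heartbeat arrives exactly one interval after the PASSIVE heartbeat earlier in the log, matching the pdpHeartbeatIntervalMs=120000 that PAP pushed in its PDP_UPDATE. Checked directly with the two timestampMs values from the log:

# timestampMs of the two PDP_STATUS heartbeats seen in this log.
first_heartbeat_ms = 1749856421464   # 23:13:41 UTC, state PASSIVE
second_heartbeat_ms = 1749856541464  # 23:15:41 UTC, state ACTIVE

interval_ms = second_heartbeat_ms - first_heartbeat_ms
assert interval_ms == 120000  # == pdpHeartbeatIntervalMs from the PDP_UPDATE
print(f"heartbeat interval: {interval_ms} ms ({interval_ms / 1000:.0f} s)")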
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | syncing data to disk ... ok
postgres |
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres | waiting for server to start....2025-06-13 23:12:42.330 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 23:12:42.332 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 23:12:42.338 UTC [52] LOG: database system was shut down at 2025-06-13 23:12:41 UTC
postgres | 2025-06-13 23:12:42.344 UTC [49] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres |
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres |
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | CREATE DATABASE
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | GRANT
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres |
postgres | 2025-06-13 23:12:43.671 UTC [49] LOG: received fast shutdown request
postgres | waiting for server to shut down....2025-06-13 23:12:43.673 UTC [49] LOG: aborting any active transactions
postgres | 2025-06-13 23:12:43.675 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
postgres | 2025-06-13 23:12:43.675 UTC [50] LOG: shutting down
postgres | 2025-06-13 23:12:43.677 UTC [50] LOG: checkpoint starting: shutdown immediate
postgres | 2025-06-13 23:12:44.061 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.303 s, sync=0.075 s, total=0.386 s; sync files=1788, longest=0.008 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218
postgres | 2025-06-13 23:12:44.073 UTC [49] LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres | 2025-06-13 23:12:44.202 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-13 23:12:44.208 UTC [102] LOG: database system was shut down at 2025-06-13 23:12:44 UTC
postgres | 2025-06-13 23:12:44.215 UTC [1] LOG: database system is ready to accept connections
prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)"
prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | time=2025-06-13T23:12:45.553Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs
prometheus | time=2025-06-13T23:12:45.559Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090
prometheus | time=2025-06-13T23:12:45.560Z level=INFO source=main.go:1266 msg="Starting TSDB ..."
prometheus | time=2025-06-13T23:12:45.562Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090
prometheus | time=2025-06-13T23:12:45.562Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090
prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb
prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.96µs
prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb
prometheus | time=2025-06-13T23:12:45.570Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=933.105µs
prometheus | time=2025-06-13T23:12:45.570Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=192.029µs wal_replay_duration=977.597µs wbl_replay_duration=470ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.96µs total_replay_duration=1.921302ms
prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC
prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1290 msg="TSDB started"
prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75
prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.62µs remote_storage=2.981µs web_handler=920ns query_engine=1.58µs scrape=331.266µs scrape_sd=253.672µs notify=153.597µs notify_sd=32.742µs rules=1.91µs tracing=8.47µs filename=/etc/prometheus/prometheus.yml totalDuration=1.604488ms
prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests."
prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager"
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
simulator | 2025-06-13 23:12:42,966 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | 2025-06-13 23:12:43,040 INFO org.onap.policy.models.simulators starting
simulator | 2025-06-13 23:12:43,040 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
simulator | 2025-06-13 23:12:43,261 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
simulator | 2025-06-13 23:12:43,262 INFO org.onap.policy.models.simulators starting A&AI simulator
simulator | 2025-06-13 23:12:43,500 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START
simulator | 2025-06-13 23:12:43,513 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
simulator | 2025-06-13 23:12:43,515 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN
simulator | 2025-06-13 23:12:43,524 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0
simulator | 2025-06-13 23:12:43,585 INFO Session workerName=node0
simulator | 2025-06-13 23:12:43,606 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}
simulator | 2025-06-13 23:12:44,283 INFO Using GSON for REST calls
simulator | 2025-06-13 23:12:44,348 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}
simulator | 2025-06-13 23:12:44,358 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
simulator | 2025-06-13 23:12:44,359 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @1922ms
simulator | 2025-06-13 23:12:44,359 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4156 ms.
simulator | 2025-06-13 23:12:44,370 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2025-06-13 23:12:44,378 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START
simulator | 2025-06-13 23:12:44,378 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
simulator | 2025-06-13 23:12:44,388 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN
simulator | 2025-06-13 23:12:44,391 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0
simulator | 2025-06-13 23:12:44,413 INFO Session workerName=node0
simulator | 2025-06-13 23:12:44,416 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}
simulator | 2025-06-13 23:12:44,477 INFO Using GSON for REST calls
simulator | 2025-06-13 23:12:44,488 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}
simulator | 2025-06-13 23:12:44,490 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2025-06-13 23:12:44,490 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @2053ms
simulator | 2025-06-13 23:12:44,490 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4898 ms.
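Everything from policy-pap down to zookeeper in this section is a single interleaved docker compose log stream, each line tagged "service | content". When a run like this needs to be read per service, a small splitter over that visible prefix format is enough (a sketch; the sample lines are abbreviated from this log):

from collections import defaultdict

def split_compose_log(lines):
    """Group 'service | message' lines from a compose log by service name."""
    by_service = defaultdict(list)
    for line in lines:
        prefix, sep, message = line.partition(" | ")
        if sep:  # lines without the separator (plain Jenkins output) are skipped
            by_service[prefix.strip()].append(message)
    return by_service

sample = [
    "policy-pap | [2025-06-13T23:13:41.455+00:00|INFO|...] discarding event",
    "postgres | database system is ready to accept connections",
    "simulator | 2025-06-13 23:12:44,573 INFO org.onap.policy.models.simulators started",
]
for service, messages in split_compose_log(sample).items():
    print(service, len(messages))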
simulator | 2025-06-13 23:12:44,492 INFO org.onap.policy.models.simulators starting SO simulator
simulator | 2025-06-13 23:12:44,498 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START
simulator | 2025-06-13 23:12:44,499 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
simulator | 2025-06-13 23:12:44,501 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN
simulator | 2025-06-13 23:12:44,501 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0
simulator | 2025-06-13 23:12:44,504 INFO Session workerName=node0
simulator | 2025-06-13 23:12:44,505 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}
simulator | 2025-06-13 23:12:44,558 INFO Using GSON for REST calls
simulator | 2025-06-13 23:12:44,570 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}
simulator | 2025-06-13 23:12:44,571 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
simulator | 2025-06-13 23:12:44,571 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @2134ms
simulator | 2025-06-13 23:12:44,572 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4928 ms.
simulator | 2025-06-13 23:12:44,573 INFO org.onap.policy.models.simulators started
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2025-06-13 23:12:46,744] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,747] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,747] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,747] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,747] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,748] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 23:12:46,749] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 23:12:46,749] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-13 23:12:46,749] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2025-06-13 23:12:46,750] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2025-06-13 23:12:46,750] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,750] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,750] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,750] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,750] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-13 23:12:46,750] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2025-06-13 23:12:46,761] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2025-06-13 23:12:46,764] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-13 23:12:46,764] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-13 23:12:46,766] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 23:12:46,773] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,773] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,773] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,774] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,777] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-13 23:12:46,777] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,777] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,778] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-13 23:12:46,778] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-13 23:12:46,781] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,781] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,782] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-13 23:12:46,782] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-13 23:12:46,782] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:46,808] INFO Logging initialized @422ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-13 23:12:46,864] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 23:12:46,864] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 23:12:46,885] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-13 23:12:46,919] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 23:12:46,919] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 23:12:46,920] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-13 23:12:46,923] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-13 23:12:46,932] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-13 23:12:46,943] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-13 23:12:46,944] INFO Started @563ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-13 23:12:46,944] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2025-06-13 23:12:46,948] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-13 23:12:46,949] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-13 23:12:46,950] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-13 23:12:46,954] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-13 23:12:46,977] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-13 23:12:46,978] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-13 23:12:46,978] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 23:12:46,978] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 23:12:46,989] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-13 23:12:46,989] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 23:12:46,995] INFO Snapshot loaded in 16 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-13 23:12:46,996] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-13 23:12:46,997] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-13 23:12:47,011] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-13 23:12:47,011] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-13 23:12:47,034] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-13 23:12:47,034] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-13 23:12:48,183] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
Tearing down containers...
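For reference, the session bounds in the ZooKeeper startup above are its defaults: minSessionTimeout = 2 x tickTime and maxSessionTimeout = 20 x tickTime, so the logged tickTime of 3000 ms yields exactly the 6000 ms and 60000 ms values printed. A minimal sketch of a zoo.cfg that would reproduce the directories and ports seen above (assumption: standard ZooKeeper property names; the CSIT compose image may instead inject these via environment variables):

    # Sketch only, not the job's actual config file
    cat > zoo.cfg <<'EOF'
    tickTime=3000                      # min session timeout defaults to 2x, max to 20x
    dataDir=/var/lib/zookeeper/data    # snapshot dir logged above
    dataLogDir=/var/lib/zookeeper/log  # transaction log dir logged above
    clientPort=2181                    # "binding to port 0.0.0.0/0.0.0.0:2181"
    admin.serverPort=8080              # Jetty AdminServer port logged above
    EOF

The Stopping/Stopped/Removing/Removed sequence below, ending with removal of the compose_default network, is characteristic of a compose shutdown; a sketch of the likely command (assumption: the exact teardown script and its flags are not shown in this log):

    # Sketch only: assumed teardown step, run from the compose project directory
    docker-compose down --remove-orphans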
Container grafana  Stopping
Container policy-csit  Stopping
Container policy-apex-pdp  Stopping
Container policy-csit  Stopped
Container policy-csit  Removing
Container policy-csit  Removed
Container grafana  Stopped
Container grafana  Removing
Container grafana  Removed
Container prometheus  Stopping
Container prometheus  Stopped
Container prometheus  Removing
Container prometheus  Removed
Container policy-apex-pdp  Stopped
Container policy-apex-pdp  Removing
Container policy-apex-pdp  Removed
Container policy-pap  Stopping
Container simulator  Stopping
Container simulator  Stopped
Container simulator  Removing
Container simulator  Removed
Container policy-pap  Stopped
Container policy-pap  Removing
Container policy-pap  Removed
Container kafka  Stopping
Container policy-api  Stopping
Container kafka  Stopped
Container kafka  Removing
Container kafka  Removed
Container zookeeper  Stopping
Container zookeeper  Stopped
Container zookeeper  Removing
Container zookeeper  Removed
Container policy-api  Stopped
Container policy-api  Removing
Container policy-api  Removed
Container policy-db-migrator  Stopping
Container policy-db-migrator  Stopped
Container policy-db-migrator  Removing
Container policy-db-migrator  Removed
Container postgres  Stopping
Container postgres  Stopped
Container postgres  Removing
Container postgres  Removed
Network compose_default  Removing
Network compose_default  Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2071 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2130498470068225806.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11177215652846351191.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4698450907503617039.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14460945584074144555.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9487861863639614318tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7287300876831309683.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15432271917607787771.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3882028246046477712.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13413713206687790454.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins5740275028267789400.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2101
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
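The archive and log upload above is performed with lftools; a sketch of an equivalent manual invocation, inferred from the INFO lines (assumptions: logs-deploy.sh internals are not shown in this log; `lftools deploy archives` and `lftools deploy logs` are standard lftools subcommands, and $WORKSPACE/$BUILD_URL are Jenkins-provided variables):

    # Sketch only: push matching workspace artifacts, then the console logs, to Nexus
    NEXUS_URL=https://nexus.onap.org
    NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2101
    lftools deploy archives -p '**/target/surefire-reports/*-output.txt' "$NEXUS_URL" "$NEXUS_PATH" "$WORKSPACE"
    lftools deploy logs "$NEXUS_URL" "$NEXUS_PATH" "$BUILD_URL"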
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-20975 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   16G  140G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         884       23269           0        8013       30827
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:b8:4b:66 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.209/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85959sec preferred_lft 85959sec
    inet6 fe80::f816:3eff:feb8:4b66/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ce:3c:ac:97 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ceff:fe3c:ac97/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20975)  06/13/25  _x86_64_  (8 CPU)

23:10:20     LINUX RESTART  (8 CPU)

23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       191.29     26.09    165.19   2396.00  92980.74
23:13:01       691.12      5.15    685.97    456.32 244251.02
23:14:01        16.93      0.10     16.83     14.00   3750.84
23:15:01       218.90      0.33    218.56     30.79  33842.63
23:16:01         8.70      0.02      8.68      0.13    208.63
23:17:01        61.48      0.82     60.66     42.12   1130.82
Average:       198.06      5.42    192.65    489.94  62693.25

23:11:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
23:12:01   27657780  31544436    5281440     16.03      86964   4067916   2382368     7.01   1043252  3847588  2254372
23:13:01   23201416  30556152    9737804     29.56     164144   7255988   7276116    21.41   2261716  6805352      128
23:14:01   22157252  29620224   10781968     32.73     165772   7360904   8538240    25.12   3260176  6833504      212
23:15:01   21531320  29524872   11407900     34.63     206540   7798068   8877048    26.12   3447716  7211404     2096
23:16:01   21494148  29488896   11445072     34.75     206700   7798904   8959740    26.36   3486920  7208892      152
23:17:01   23862220  31590168    9077000     27.56     207488   7528452   1632200     4.80   1447256  6963152    11052
Average:   23317356  30387458    9621864     29.21     172935   6968372   6277619    18.47   2491173  6478315   378002

23:11:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01         ens3    992.67    613.51  23974.53     52.56      0.00      0.00      0.00      0.00
23:12:01           lo     12.26     12.26      1.16      1.16      0.00      0.00      0.00      0.00
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01  veth7433cdb      0.38      0.52      0.02      0.03      0.00      0.00      0.00      0.00
23:13:01         ens3    639.06    360.64  19724.04     30.00      0.00      0.00      0.00      0.00
23:13:01  veth139f22f      0.48      0.67      0.03      0.04      0.00      0.00      0.00      0.00
23:13:01  veth45d5c1c     40.66     49.26      3.11    311.18      0.00      0.00      0.00      0.03
23:14:01  veth7433cdb      3.68      4.67      0.59      0.47      0.00      0.00      0.00      0.00
23:14:01         ens3      7.58      3.65      6.10      0.91      0.00      0.00      0.00      0.00
23:14:01  veth139f22f     10.70     11.66      2.20      1.51      0.00      0.00      0.00      0.00
23:14:01  veth45d5c1c      0.28      0.45      0.02      0.03      0.00      0.00      0.00      0.00
23:15:01  vethacb633e      0.65      0.70      1.50      0.85      0.00      0.00      0.00      0.00
23:15:01  veth7433cdb      3.25      4.78      0.53      0.37      0.00      0.00      0.00      0.00
23:15:01         ens3    236.73    170.85   2207.93     13.48      0.00      0.00      0.00      0.00
23:15:01  veth139f22f      6.48      9.38      1.50      0.73      0.00      0.00      0.00      0.00
23:16:01  vethacb633e      1.70      1.43      0.23      1.01      0.00      0.00      0.00      0.00
23:16:01  veth7433cdb      3.28      4.73      0.54      0.37      0.00      0.00      0.00      0.00
23:16:01         ens3      1.73      1.55      0.39      0.53      0.00      0.00      0.00      0.00
23:16:01  veth139f22f    158.37    160.32     19.63     38.22      0.00      0.00      0.00      0.00
23:17:01         ens3     61.46     40.49     64.86     27.62      0.00      0.00      0.00      0.00
23:17:01           lo     27.89     27.89      2.51      2.51      0.00      0.00      0.00      0.00
23:17:01      docker0    135.80    189.79      8.55   1359.72      0.00      0.00      0.00      0.00
Average:         ens3    323.22    198.46   7663.22     20.85      0.00      0.00      0.00      0.00
Average:           lo      3.99      3.99      0.36      0.36      0.00      0.00      0.00      0.00
Average:      docker0     22.64     31.63      1.43    226.65      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20975)  06/13/25  _x86_64_  (8 CPU)

23:10:20     LINUX RESTART  (8 CPU)

23:11:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01     all     12.88      0.00      2.55      3.26      0.04     81.26
23:12:01       0     24.67      0.00      2.92      1.00      0.05     71.35
23:12:01       1     10.97      0.00      2.09      0.43      0.03     86.47
23:12:01       2     21.00      0.00      2.77      3.32      0.07     72.85
23:12:01       3      5.27      0.00      1.91      0.22      0.03     92.58
23:12:01       4      4.36      0.00      2.78     16.93      0.03     75.90
23:12:01       5     11.88      0.00      2.40      3.62      0.05     82.06
23:12:01       6     10.63      0.00      2.72      0.10      0.05     86.50
23:12:01       7     14.21      0.00      2.79      0.50      0.02     82.48
23:13:01     all     23.33      0.00      8.33     10.30      0.07     57.96
23:13:01       0     21.75      0.00      9.18     37.68      0.08     31.31
23:13:01       1     23.01      0.00      8.27      1.77      0.07     66.89
23:13:01       2     26.20      0.00      8.89      4.03      0.08     60.80
23:13:01       3     24.75      0.00      7.49     12.70      0.05     55.01
23:13:01       4     23.12      0.00      7.19     17.95      0.08     51.65
23:13:01       5     23.69      0.00      7.42      3.73      0.07     65.09
23:13:01       6     23.87      0.00      8.08      2.70      0.08     65.26
23:13:01       7     20.19      0.00     10.18      2.05      0.07     67.52
23:14:01     all     18.74      0.00      1.51      0.16      0.05     79.54
23:14:01       0     23.57      0.00      1.62      0.00      0.07     74.75
23:14:01       1     20.19      0.00      1.53      0.00      0.05     78.24
23:14:01       2     19.14      0.00      1.75      0.05      0.07     78.99
23:14:01       3     16.16      0.00      1.11      0.02      0.07     82.65
23:14:01       4     18.27      0.00      1.62      0.00      0.03     80.07
23:14:01       5     20.77      0.00      1.52      0.00      0.07     77.65
23:14:01       6     17.07      0.00      1.51      1.19      0.05     80.18
23:14:01       7     14.72      0.00      1.44      0.03      0.07     83.74
23:15:01     all      8.95      0.00      2.42      1.42      0.06     87.16
23:15:01       0     11.14      0.00      2.66      3.42      0.07     82.72
23:15:01       1      6.80      0.00      1.94      0.50      0.05     90.71
23:15:01       2      9.47      0.00      3.44      0.44      0.05     86.60
23:15:01       3     10.31      0.00      2.21      1.16      0.07     86.26
23:15:01       4      6.18      0.00      2.08      0.94      0.07     90.73
23:15:01       5      8.31      0.00      2.59      0.17      0.05     88.88
23:15:01       6      4.55      0.00      1.61      3.03      0.05     90.76
23:15:01       7     14.80      0.00      2.77      1.71      0.05     80.67
23:16:01     all      4.03      0.00      0.37      0.03      0.04     95.52
23:16:01       0      5.54      0.00      0.40      0.00      0.05     94.01
23:16:01       1      4.39      0.00      0.33      0.00      0.03     95.25
23:16:01       2      3.07      0.00      0.47      0.02      0.05     96.39
23:16:01       3      3.41      0.00      0.33      0.13      0.05     96.07
23:16:01       4      5.26      0.00      0.33      0.02      0.02     94.38
23:16:01       5      3.61      0.00      0.38      0.00      0.07     95.94
23:16:01       6      4.11      0.00      0.38      0.02      0.03     95.46
23:16:01       7      2.89      0.00      0.33      0.00      0.03     96.75
23:17:01     all      2.35      0.00      0.74      0.10      0.03     96.77
23:17:01       0      2.30      0.00      0.75      0.23      0.03     96.69
23:17:01       1      2.45      0.00      0.70      0.05      0.03     96.77
23:17:01       2      1.50      0.00      0.69      0.15      0.03     97.63
23:17:01       3      1.52      0.00      0.72      0.07      0.03     97.66
23:17:01       4      1.82      0.00      0.69      0.03      0.03     97.42
23:17:01       5      1.79      0.00      0.75      0.20      0.03     97.23
23:17:01       6      3.34      0.00      0.68      0.03      0.03     95.91
23:17:01       7      4.08      0.00      1.00      0.07      0.02     94.84
Average:     all     11.68      0.00      2.64      2.53      0.05     83.10
Average:       0     14.80      0.00      2.90      6.97      0.06     75.27
Average:       1     11.25      0.00      2.46      0.46      0.04     85.79
Average:       2     13.37      0.00      2.99      1.33      0.06     82.26
Average:       3     10.21      0.00      2.29      2.37      0.05     85.09
Average:       4      9.81      0.00      2.44      5.95      0.04     81.76
Average:       5     11.65      0.00      2.50      1.28      0.06     84.51
Average:       6     10.57      0.00      2.49      1.18      0.05     85.72
Average:       7     11.79      0.00      3.07      0.72      0.04     84.37
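The sysstat reports above can be replayed from the binary activity file collected earlier by sysstat.sh; a sketch (assumptions: /var/log/sysstat/sa13 is the Debian/Ubuntu default daily data file for June 13, and the flags simply mirror the section headers above):

    # Sketch only: regenerate the same reports from the collected sa file
    sar -b -r -n DEV -f /var/log/sysstat/sa13   # I/O rates, memory usage, per-interface network
    sar -P ALL -f /var/log/sysstat/sa13         # per-CPU utilization table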