Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21290 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-0Dm1BNCUu2Nk/agent.2064
SSH_AGENT_PID=2066
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5957437465781684735.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5957437465781684735.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
Commit message: "Remove VFC from docker compose and helm configurations"
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
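For reference, a minimal sketch of the checkout sequence Jenkins runs above: clone into the workspace and pin the build to the exact commit reported in the log, rather than a moving branch tip. Paths, URL, and SHA are taken from the log itself.

```bash
#!/usr/bin/env bash
# Sketch of the Jenkins git step above: fetch the mirror and check out
# the pinned revision in detached-HEAD state.
set -euo pipefail

WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap   # from the log
REPO=git://cloud.onap.org/mirror/policy/docker.git
REV=8746ba7d00fb7412b3f40b6e85f47ce67cf7969c                # origin/master at build time

git init "$WORKSPACE"
cd "$WORKSPACE"
git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
git config remote.origin.url "$REPO"
git checkout -f "$REV"   # detached HEAD at the pinned revision
```

Pinning to a SHA rather than `master` is what makes the build reproducible: a later push to the branch cannot change what this job built.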
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8333237724516849399.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-KlSg
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-KlSg/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.4.26
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
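The `lf-activate-venv()` helper comes from Linux Foundation global-jjb tooling; a rough bash equivalent of its observable effect in this log (venv path, marker file, and package names follow the log; the exact helper logic is an assumption):

```bash
# Rough equivalent of the lf-activate-venv() step above, not the real helper.
venv_dir=$(mktemp -d /tmp/venv-XXXX)        # e.g. /tmp/venv-KlSg in the log
python3 -m venv "$venv_dir"
echo "$venv_dir" > /tmp/.os_lf_venv          # saved so later steps reuse the venv
"$venv_dir/bin/pip" install --upgrade pip lftools
export PATH="$venv_dir/bin:$PATH"
pip freeze                                   # the "Generating Requirements File" dump
```

The marker file `/tmp/.os_lf_venv` is what lets subsequent build steps find and reactivate the same venv instead of creating a new one.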
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins8520460021925922255.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins442415538987392085.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command. See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress output omitted; 60.2M downloaded at ~79.5M/s]
Setting project configuration for: pap
Configuring docker compose...
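Both warnings above are avoidable. A sketch of the safer login plus a manual install of the Compose v2 CLI plugin; the registry variable, plugin version, and download URL layout are illustrative assumptions, not values from the log:

```bash
# Safer login: pass the password on stdin instead of the command line,
# so it never appears in the process list or shell history.
echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" \
    --password-stdin "$REGISTRY"    # $REGISTRY/$DOCKER_* come from job config

# Install the Compose v2 CLI plugin so `docker compose` is recognized.
# Version v2.27.0 and the asset name are assumptions for illustration.
mkdir -p ~/.docker/cli-plugins
curl -fsSL -o ~/.docker/cli-plugins/docker-compose \
    "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64"
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version    # verify the plugin is picked up
```

A credential helper (or `--password-stdin` with short-lived tokens) also addresses the second warning about the plaintext entry in ~/.docker/config.json.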
Starting apex-pdp using postgres + Grafana/Prometheus
kafka Pulling
simulator Pulling
zookeeper Pulling
postgres Pulling
prometheus Pulling
policy-db-migrator Pulling
grafana Pulling
pap Pulling
api Pulling
apex-pdp Pulling
[per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting / "Pull complete" progress output omitted]
api Pulled
policy-db-migrator Pulled
pap Pulled
simulator Pulled
apex-pdp Pulled
[log truncated mid-pull; layers for the remaining images (zookeeper, kafka, postgres, prometheus, grafana) were still downloading]
Downloading [============================> ] 71.91MB/127.4MB 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 55f2b468da67 Extracting [======> ] 35.65MB/257.9MB da3ed5db7103 Downloading [================================> ] 83.8MB/127.4MB 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 55f2b468da67 Extracting [=========> ] 46.79MB/257.9MB da3ed5db7103 Downloading [=======================================> ] 100MB/127.4MB 55f2b468da67 Extracting [=========> ] 51.25MB/257.9MB da3ed5db7103 Downloading [=============================================> ] 115.2MB/127.4MB 384497dbce3b Extracting [===> ] 3.899MB/63.48MB 408012a7b118 Pull complete e444bcd4d577 Pull complete da3ed5db7103 Downloading [==============================================> ] 117.9MB/127.4MB 55f2b468da67 Extracting [==========> ] 53.48MB/257.9MB da3ed5db7103 Verifying Checksum da3ed5db7103 Download complete 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 55f2b468da67 Extracting [============> ] 62.95MB/257.9MB 55f2b468da67 Extracting [=============> ] 68.52MB/257.9MB 384497dbce3b Extracting [===> ] 5.014MB/63.48MB c4d302cc468d Pull complete 55f2b468da67 Extracting [===============> ] 78.54MB/257.9MB 384497dbce3b Extracting [=====> ] 6.685MB/63.48MB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 55f2b468da67 Extracting [================> ] 83.56MB/257.9MB 01e0882c90d9 Extracting [===========> ] 327.7kB/1.447MB eabd8714fec9 Extracting [> ] 557.1kB/375MB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 55f2b468da67 Extracting [=================> ] 91.91MB/257.9MB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB eabd8714fec9 Extracting [=> ] 10.03MB/375MB 55f2b468da67 Extracting [===================> ] 100.8MB/257.9MB 55f2b468da67 Extracting [===================> ] 101.9MB/257.9MB eabd8714fec9 Extracting [=> ] 13.93MB/375MB eabd8714fec9 Extracting [==> ] 15.04MB/375MB 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB 01e0882c90d9 Pull complete 44986281b8b9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB eabd8714fec9 Extracting [==> ] 20.05MB/375MB 384497dbce3b Extracting [========> ] 11.14MB/63.48MB 55f2b468da67 Extracting [====================> ] 107.5MB/257.9MB 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB eabd8714fec9 Extracting [===> ] 22.84MB/375MB 384497dbce3b Extracting [==========> ] 12.81MB/63.48MB 55f2b468da67 Extracting [=====================> ] 110.3MB/257.9MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB 55f2b468da67 Extracting [======================> ] 113.6MB/257.9MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB eabd8714fec9 Extracting [====> ] 32.31MB/375MB 55f2b468da67 Extracting [======================> ] 117MB/257.9MB 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 531ee2cf3c0c Extracting [========================> ] 4.03MB/8.066MB eabd8714fec9 Extracting [=====> ] 40.11MB/375MB 55f2b468da67 Extracting [=======================> ] 119.2MB/257.9MB 531ee2cf3c0c Extracting [================================> ] 5.308MB/8.066MB 384497dbce3b Extracting [==============> ] 17.83MB/63.48MB eabd8714fec9 Extracting [======> ] 
47.35MB/375MB 531ee2cf3c0c Extracting [======================================> ] 6.291MB/8.066MB 55f2b468da67 Extracting [=======================> ] 123.1MB/257.9MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 384497dbce3b Extracting [================> ] 21.17MB/63.48MB eabd8714fec9 Extracting [======> ] 52.36MB/375MB 55f2b468da67 Extracting [========================> ] 128.1MB/257.9MB eabd8714fec9 Extracting [========> ] 60.72MB/375MB bf70c5107ab5 Pull complete 384497dbce3b Extracting [==================> ] 23.4MB/63.48MB 55f2b468da67 Extracting [=========================> ] 132MB/257.9MB eabd8714fec9 Extracting [========> ] 62.95MB/375MB 384497dbce3b Extracting [===================> ] 25.07MB/63.48MB 55f2b468da67 Extracting [==========================> ] 134.3MB/257.9MB eabd8714fec9 Extracting [=========> ] 70.75MB/375MB 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB 55f2b468da67 Extracting [==========================> ] 138.7MB/257.9MB eabd8714fec9 Extracting [==========> ] 80.77MB/375MB 384497dbce3b Extracting [========================> ] 31.2MB/63.48MB 55f2b468da67 Extracting [===========================> ] 142MB/257.9MB eabd8714fec9 Extracting [============> ] 91.36MB/375MB 384497dbce3b Extracting [==========================> ] 33.98MB/63.48MB 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 384497dbce3b Extracting [===========================> ] 35.09MB/63.48MB 55f2b468da67 Extracting [=============================> ] 154.3MB/257.9MB eabd8714fec9 Extracting [=============> ] 104.2MB/375MB 384497dbce3b Extracting [=============================> ] 37.88MB/63.48MB 55f2b468da67 Extracting [==============================> ] 158.2MB/257.9MB eabd8714fec9 Extracting [==============> ] 109.2MB/375MB 384497dbce3b Extracting [================================> ] 41.22MB/63.48MB 55f2b468da67 Extracting [===============================> ] 161.5MB/257.9MB eabd8714fec9 Extracting [===============> ] 112.5MB/375MB 384497dbce3b Extracting [==================================> ] 44.01MB/63.48MB 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB eabd8714fec9 Extracting [===============> ] 117MB/375MB 384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB eabd8714fec9 Extracting [================> ] 121.4MB/375MB 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB eabd8714fec9 Extracting [================> ] 126.5MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 384497dbce3b Extracting [========================================> ] 51.81MB/63.48MB eabd8714fec9 Extracting [=================> ] 129.2MB/375MB 531ee2cf3c0c Pull complete 384497dbce3b Extracting [==========================================> ] 53.48MB/63.48MB 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB eabd8714fec9 Extracting [=================> ] 134.3MB/375MB 384497dbce3b Extracting [==============================================> ] 58.49MB/63.48MB 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB eabd8714fec9 
Extracting [==================> ] 137.6MB/375MB eabd8714fec9 Extracting [==================> ] 139.8MB/375MB eabd8714fec9 Extracting [===================> ] 145.4MB/375MB 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB eabd8714fec9 Extracting [===================> ] 148.7MB/375MB ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 384497dbce3b Extracting [===============================================> ] 60.16MB/63.48MB 1ccde423731d Pull complete 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB eabd8714fec9 Extracting [====================> ] 151.5MB/375MB ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 384497dbce3b Extracting [=================================================> ] 62.95MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB eabd8714fec9 Extracting [====================> ] 153.7MB/375MB 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB eabd8714fec9 Extracting [====================> ] 157.1MB/375MB 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB eabd8714fec9 Extracting [=====================> ] 162.7MB/375MB 55f2b468da67 Extracting [====================================> ] 187.7MB/257.9MB eabd8714fec9 Extracting [======================> ] 171MB/375MB 55f2b468da67 Extracting [=====================================> ] 192.2MB/257.9MB eabd8714fec9 Extracting [========================> ] 186.6MB/375MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB eabd8714fec9 Extracting [==========================> ] 196.6MB/375MB eabd8714fec9 Extracting [===========================> ] 209.5MB/375MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB ed54a7dee1d8 Pull complete 384497dbce3b Pull complete 7221d93db8a9 Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eabd8714fec9 Extracting [=============================> ] 221.2MB/375MB 55f2b468da67 Extracting 
[======================================> ] 201.1MB/257.9MB eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB eabd8714fec9 Extracting [==============================> ] 229.5MB/375MB eabd8714fec9 Extracting [===============================> ] 234.5MB/375MB 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB eabd8714fec9 Extracting [================================> ] 246.2MB/375MB 55f2b468da67 Extracting [========================================> ] 209.5MB/257.9MB eabd8714fec9 Extracting [=================================> ] 249.6MB/375MB 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 55f2b468da67 Extracting [==========================================> ] 216.7MB/257.9MB eabd8714fec9 Extracting [==================================> ] 256.8MB/375MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB eabd8714fec9 Extracting [===================================> ] 266.3MB/375MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB 12c5c803443f Pull complete 7df673c7455d Pull complete 055b9255fa03 Pull complete b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB eabd8714fec9 Extracting [====================================> ] 273MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 55f2b468da67 Extracting [=============================================> ] 233.4MB/257.9MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB eabd8714fec9 Extracting [====================================> ] 275.7MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 55f2b468da67 Extracting [==============================================> ] 238.4MB/257.9MB 
eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB eabd8714fec9 Extracting [=====================================> ] 279.6MB/375MB 55f2b468da67 Extracting [==============================================> ] 239MB/257.9MB eabd8714fec9 Extracting [======================================> ] 285.2MB/375MB eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB prometheus Pulled eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB b176d7edde70 Pull complete e27c75a98748 Pull complete 55f2b468da67 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB grafana Pulled eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB 82bfc142787e Extracting [====================> ] 3.539MB/8.613MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB e73cb4a42719 Extracting [==> ] 4.456MB/109.1MB 82bfc142787e Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB e73cb4a42719 Extracting [====> ] 9.47MB/109.1MB 46baca71a4ef Pull complete eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB e73cb4a42719 Extracting [======> ] 13.37MB/109.1MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB e73cb4a42719 Extracting [=======> ] 17.27MB/109.1MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB b0e0ef7895f4 Extracting [===========> ] 8.651MB/37.01MB e73cb4a42719 Extracting [=========> ] 21.73MB/109.1MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB b0e0ef7895f4 Extracting [==========================> ] 19.27MB/37.01MB e73cb4a42719 Extracting [===========> ] 25.07MB/109.1MB eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB b0e0ef7895f4 Extracting [===================================> ] 26.35MB/37.01MB e73cb4a42719 Extracting [============> ] 27.85MB/109.1MB b0e0ef7895f4 Extracting [================================================> ] 36.18MB/37.01MB eabd8714fec9 Extracting [==========================================> ] 320.9MB/375MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB e73cb4a42719 Extracting [==============> ] 31.75MB/109.1MB b0e0ef7895f4 Pull complete eabd8714fec9 Extracting [==========================================> ] 322MB/375MB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca 
Extracting [==================================================>] 1.105kB/1.105kB e73cb4a42719 Extracting [===============> ] 34.54MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B e73cb4a42719 Extracting [=================> ] 38.99MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B e73cb4a42719 Extracting [===================> ] 42.89MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB e73cb4a42719 Extracting [=====================> ] 47.91MB/109.1MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB e040ea11fa10 Pull complete eabd8714fec9 Extracting [============================================> ] 333.1MB/375MB e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 338.1MB/375MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 09d5a3f70313 Extracting [====> ] 10.03MB/109.2MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 09d5a3f70313 Extracting [========> ] 19.5MB/109.2MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB e73cb4a42719 Extracting [============================> ] 62.95MB/109.1MB 09d5a3f70313 Extracting [=============> ] 28.41MB/109.2MB e73cb4a42719 Extracting [===============================> ] 69.63MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 09d5a3f70313 Extracting [==================> ] 40.67MB/109.2MB e73cb4a42719 Extracting [==================================> ] 74.65MB/109.1MB 09d5a3f70313 Extracting [=========================> ] 54.59MB/109.2MB e73cb4a42719 Extracting [====================================> ] 79.66MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 09d5a3f70313 Extracting [==============================> ] 65.73MB/109.2MB e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 344.3MB/375MB 09d5a3f70313 Extracting [===================================> ] 77.43MB/109.2MB e73cb4a42719 Extracting [=========================================> ] 89.69MB/109.1MB 09d5a3f70313 Extracting [=========================================> ] 90.24MB/109.2MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB 09d5a3f70313 Extracting [=============================================> ] 98.6MB/109.2MB eabd8714fec9 Extracting 
[==============================================> ] 349.3MB/375MB e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB eabd8714fec9 Extracting [===============================================> ] 353.2MB/375MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB 09d5a3f70313 Extracting [=================================================> ] 108.1MB/109.2MB e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 09d5a3f70313 Pull complete e73cb4a42719 Extracting [==============================================> ] 101.4MB/109.1MB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB eabd8714fec9 Extracting [===============================================> ] 357.6MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 356f5c2c843b Pull complete kafka Pulled eabd8714fec9 Extracting [================================================> ] 366.5MB/375MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB eabd8714fec9 Extracting [=================================================> ] 372.1MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB e73cb4a42719 Pull complete eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Pull complete 45fd2fec8a19 Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [=======================> ] 4.129MB/8.768MB 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB f963a77d2726 Pull complete 4b82842ab819 Pull complete 7e568a0dc8fb Extracting 
[==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 7e568a0dc8fb Pull complete postgres Pulled f3a82e9f1761 Extracting [============> ] 11.47MB/44.41MB f3a82e9f1761 Extracting [=============================> ] 26.15MB/44.41MB f3a82e9f1761 Extracting [============================================> ] 39.91MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [=====> ] 13.37MB/127.4MB da3ed5db7103 Extracting [==========> ] 27.3MB/127.4MB da3ed5db7103 Extracting [=================> ] 44.01MB/127.4MB da3ed5db7103 Extracting [========================> ] 61.28MB/127.4MB da3ed5db7103 Extracting [==============================> ] 77.43MB/127.4MB da3ed5db7103 Extracting [====================================> ] 94.14MB/127.4MB da3ed5db7103 Extracting [==========================================> ] 108.6MB/127.4MB da3ed5db7103 Extracting [==============================================> ] 119.2MB/127.4MB da3ed5db7103 Extracting [================================================> ] 124.2MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container postgres Creating Container prometheus Creating Container simulator Creating Container prometheus Created Container grafana Creating Container simulator Created Container zookeeper Created Container kafka Creating Container postgres Created Container policy-db-migrator Creating Container grafana Created Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-apex-pdp Creating Container policy-apex-pdp Created Container prometheus Starting Container 
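The "Pulled" and "Creating/Created" events above are standard docker compose output. A minimal sketch of the equivalent commands (not the actual CSIT wrapper scripts, and the compose file path here is an assumption):

# Minimal sketch; the real CSIT setup scripts wrap calls like these.
# The compose file location is assumed, not taken from this log.
docker compose -f compose/docker-compose.yml pull    # per-layer pull progress and "<service> Pulled" lines
docker compose -f compose/docker-compose.yml up -d   # Network/Container Creating/Starting/Started events
docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'   # the IMAGE/NAMES/STATUS tables seen below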
Container prometheus Starting
Container zookeeper Starting
Container postgres Starting
Container simulator Starting
Container prometheus Started
Container grafana Starting
Container grafana Started
Container postgres Started
Container policy-db-migrator Starting
Container simulator Started
Container policy-db-migrator Started
Container policy-api Starting
Container zookeeper Started
Container kafka Starting
Container policy-api Started
Container kafka Started
Container policy-pap Starting
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for policy-pap to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                              NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT          policy-apex-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT               policy-pap        Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT               policy-api        Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                  kafka             Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                       grafana           Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                        postgres          Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest             zookeeper         Up About a minute
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT  simulator         Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                       prometheus        Up About a minute
Checking if REST port 30001 is open on localhost ...
[same IMAGE/NAMES/STATUS table as above: all nine containers Up About a minute]
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/models'...
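A minimal sketch of the kind of wait loop behind the "Checking if REST port ... is open on localhost" messages above (illustrative only; the actual CSIT helper script may differ, and the function name wait_for_port is hypothetical):

#!/bin/bash
# Poll a TCP port until it accepts connections or the retry budget runs out.
wait_for_port() {
  local host="$1" port="$2" retries="${3:-30}"
  local i
  for ((i = 0; i < retries; i++)); do
    if nc -z "$host" "$port" 2>/dev/null; then
      echo "REST port $port is open on $host"
      return 0
    fi
    sleep 2
  done
  echo "Timed out waiting for $host:$port" >&2
  return 1
}

wait_for_port localhost 30003   # policy-pap REST port (checked above)
wait_for_port localhost 30001   # policy-api REST port (checked above)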
Building robot framework docker image
sha256:ed9c5349db1af5745ef3cdab7d27acbc0188986187d10f02af66754009726f80
top - 23:14:56 up 4 min, 0 users, load average: 1.72, 1.55, 0.69
Tasks: 233 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.8 us, 3.7 sy, 0.0 ni, 77.5 id, 3.8 wa, 0.0 hi, 0.1 si, 0.1 st
        total   used   free   shared   buff/cache   available
Mem:      31G   2.6G    20G      28M         8.1G         28G
Swap:    1.0G     0B   1.0G
IMAGE                                                              NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT          policy-apex-pdp   Up 2 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT               policy-pap        Up 2 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT               policy-api        Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                  kafka             Up 2 minutes
nexus3.onap.org:10001/grafana/grafana:latest                       grafana           Up 2 minutes
nexus3.onap.org:10001/library/postgres:16.4                        postgres          Up 2 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest             zookeeper         Up 2 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT  simulator         Up 2 minutes
nexus3.onap.org:10001/prom/prometheus:latest                       prometheus        Up 2 minutes
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
93eebf85df2b   policy-apex-pdp   0.73%   220.3MiB / 31.41GiB   0.68%   50.6kB / 66.4kB   0B / 0B         52
783b301cb982   policy-pap        0.65%   475MiB / 31.41GiB     1.48%   131kB / 217kB     0B / 139MB      69
b1d14180a68b   policy-api        0.10%   421.9MiB / 31.41GiB   1.31%   1.15MB / 1.02MB   0B / 0B         59
8d70285af2bc   kafka             3.22%   392.8MiB / 31.41GiB   1.22%   204kB / 185kB     0B / 598kB      83
9caeb73f6871   grafana           0.25%   108.4MiB / 31.41GiB   0.34%   19.1MB / 182kB    0B / 30.5MB     24
7a98abfe67ca   postgres          0.00%   85.13MiB / 31.41GiB   0.26%   1.67MB / 1.73MB   225kB / 158MB   26
b99bb60d1946   zookeeper         0.09%   83.61MiB / 31.41GiB   0.26%   53.5kB / 45.3kB   0B / 406kB      62
f5f184b4b286   simulator         0.08%   122.8MiB / 31.41GiB   0.38%   1.5kB / 0B        0B / 0B         64
5e03a8f54074   prometheus        0.00%   20.85MiB / 31.41GiB   0.06%   133kB / 6.09kB    4.1kB / 0B      12
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
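The ROBOT_VARIABLES listed above are ordinary Robot Framework -v NAME:value overrides; each one becomes a suite variable such as ${POLICY_PAP_IP} inside the .robot files. A minimal sketch of how they reach the suites (the exact entrypoint of the policy-csit image may differ; only a few representative variables from the full list above are shown):

#!/bin/bash
# Each -v NAME:value sets a suite variable inside pap-test.robot / pap-slas.robot.
ROBOT_VARIABLES="-v POLICY_API_IP:policy-api:6969 -v POLICY_PAP_IP:policy-pap:6969 -v PROMETHEUS_IP:prometheus:9090"
python3 -m robot --outputdir /tmp/results ${ROBOT_VARIABLES} pap-test.robot pap-slas.robot
echo "RESULT: $?"   # robot's exit code is the number of failed tests, hence RESULT: 0 below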
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                              NAMES             STATUS
nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT          policy-apex-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT               policy-pap        Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT               policy-api        Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                  kafka             Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                       grafana           Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                        postgres          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest             zookeeper         Up 3 minutes
nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT  simulator         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                       prometheus        Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-14T23:12:46.924902973Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-14T23:12:46Z
grafana | logger=settings t=2025-06-14T23:12:46.925387029Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-14T23:12:46.92542804Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-14T23:12:46.925455551Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-14T23:12:46.925481713Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-14T23:12:46.925504653Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-14T23:12:46.925529434Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-14T23:12:46.925562435Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-14T23:12:46.925592566Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-14T23:12:46.925650998Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-14T23:12:46.925698539Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-14T23:12:46.925737481Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-14T23:12:46.925784712Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-14T23:12:46.925841964Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-14T23:12:46.925871505Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-14T23:12:46.925900636Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-14T23:12:46.925942497Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-14T23:12:46.925969838Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-14T23:12:46.926007199Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-14T23:12:46.926455923Z level=info msg=FeatureToggles ssoSettingsSAML=true cloudWatchRoundUpEndTime=true dashgpt=true newDashboardSharingComponent=true panelMonitoring=true newPDFRendering=true kubernetesClientDashboardsFolders=true lokiLabelNamesQueryApi=true transformationsRedesign=true recordedQueriesMulti=true alertingQueryAndExpressionsStepMode=true lokiQuerySplitting=true prometheusAzureOverrideAudience=true alertingUIOptimizeReducer=true promQLScope=true newFiltersUI=true annotationPermissionUpdate=true unifiedStorageSearchPermissionFiltering=true preinstallAutoUpdate=true alertingApiServer=true nestedFolders=true alertingInsights=true pluginsDetailsRightPanel=true onPremToCloudMigrations=true formatString=true prometheusUsesCombobox=true dataplaneFrontendFallback=true addFieldFromCalculationStatFunctions=true tlsMemcached=true lokiStructuredMetadata=true lokiQueryHints=true logRowsPopoverMenu=true alertingSimplifiedRouting=true alertingNotificationsStepMode=true groupToNestedTableTransformation=true recoveryThreshold=true influxdbBackendMigration=true alertRuleRestore=true grafanaconThemes=true logsInfiniteScrolling=true logsContextDatasourceUi=true angularDeprecationUI=true failWrongDSUID=true dashboardScene=true pinNavItems=true dashboardSceneForViewers=true alertingRuleVersionHistoryRestore=true unifiedRequestLog=true dashboardSceneSolo=true cloudWatchCrossAccountQuerying=true alertingRuleRecoverDeleted=true reportingUseRawTimeRange=true correlations=true cloudWatchNewLabelParsing=true alertingRulePermanentlyDelete=true kubernetesPlaylists=true useSessionStorageForRedirection=true azureMonitorPrometheusExemplars=true publicDashboardsScene=true externalCorePlugins=true awsAsyncQueryCaching=true azureMonitorEnableUserAuth=true logsExploreTableVisualisation=true ssoSettingsApi=true logsPanelControls=true
grafana | logger=sqlstore t=2025-06-14T23:12:46.926568137Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-14T23:12:46.926610418Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-14T23:12:46.92855536Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-14T23:12:46.928602822Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-14T23:12:46.929388927Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-14T23:12:46.93044439Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.054604ms
grafana | logger=migrator t=2025-06-14T23:12:46.942564856Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-14T23:12:46.944395964Z level=info msg="Migration successfully executed" id="create user table" duration=1.831228ms
grafana | logger=migrator t=2025-06-14T23:12:46.950143147Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-14T23:12:46.951111938Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=968.731µs
grafana | logger=migrator t=2025-06-14T23:12:46.955955902Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-14T23:12:46.956745607Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=789.545µs
grafana | logger=migrator t=2025-06-14T23:12:46.964881746Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-14T23:12:46.966072194Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.189408ms
grafana | logger=migrator t=2025-06-14T23:12:46.971859538Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-14T23:12:46.973399968Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.539709ms
grafana | logger=migrator t=2025-06-14T23:12:46.978982035Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-14T23:12:46.98163551Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.654085ms
grafana | logger=migrator t=2025-06-14T23:12:46.986497674Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-14T23:12:46.987439174Z level=info msg="Migration successfully executed" id="create user table v2" duration=940.86µs
grafana | logger=migrator t=2025-06-14T23:12:46.995633695Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-14T23:12:46.997609257Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.936021ms
grafana | logger=migrator t=2025-06-14T23:12:47.001740789Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.003211086Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.468357ms
grafana | logger=migrator t=2025-06-14T23:12:47.008555817Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-14T23:12:47.008885418Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=329.231µs
grafana | logger=migrator t=2025-06-14T23:12:47.017177924Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-14T23:12:47.018205548Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.025823ms
grafana | logger=migrator t=2025-06-14T23:12:47.044764869Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-14T23:12:47.047485326Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.716607ms
grafana | logger=migrator t=2025-06-14T23:12:47.051680481Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.051782174Z level=info msg="Migration successfully executed" id="Update user table charset" duration=76.932µs
grafana | logger=migrator t=2025-06-14T23:12:47.056613048Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-14T23:12:47.057775006Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.161558ms
grafana | logger=migrator t=2025-06-14T23:12:47.0654106Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-14T23:12:47.065746402Z level=info msg="Migration successfully executed" id="Add missing user data" duration=338.242µs
grafana | logger=migrator t=2025-06-14T23:12:47.071641801Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-14T23:12:47.074264595Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.622744ms
grafana | logger=migrator t=2025-06-14T23:12:47.079997389Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-14T23:12:47.080812705Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=814.816µs
grafana | logger=migrator t=2025-06-14T23:12:47.088805011Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-14T23:12:47.090053001Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.24728ms
grafana | logger=migrator t=2025-06-14T23:12:47.095355661Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-14T23:12:47.105492576Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.139775ms
grafana | logger=migrator t=2025-06-14T23:12:47.110392904Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-14T23:12:47.111271022Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=880.697µs
grafana | logger=migrator t=2025-06-14T23:12:47.114957669Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-14T23:12:47.115180717Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=222.027µs
grafana | logger=migrator t=2025-06-14T23:12:47.122789181Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-14T23:12:47.12369995Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=910.819µs
grafana | logger=migrator t=2025-06-14T23:12:47.129682001Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-14T23:12:47.132505132Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.820191ms
grafana | logger=migrator t=2025-06-14T23:12:47.137628356Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-14T23:12:47.138090461Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=461.125µs
grafana | logger=migrator t=2025-06-14T23:12:47.141568463Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-14T23:12:47.142275916Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=706.543µs
grafana | logger=migrator t=2025-06-14T23:12:47.149372523Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-14T23:12:47.150252852Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=878.818µs
grafana | logger=migrator t=2025-06-14T23:12:47.155419447Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-14T23:12:47.156202032Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=781.335µs
grafana | logger=migrator t=2025-06-14T23:12:47.162438472Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-14T23:12:47.163468315Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.029663ms
grafana | logger=migrator t=2025-06-14T23:12:47.170773079Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-14T23:12:47.172484145Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.710715ms
grafana | logger=migrator t=2025-06-14T23:12:47.178102135Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-14T23:12:47.178885399Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=782.824µs
grafana | logger=migrator t=2025-06-14T23:12:47.182898108Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-14T23:12:47.183719755Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=821.157µs
grafana | logger=migrator t=2025-06-14T23:12:47.187581308Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-14T23:12:47.188685903Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.103785ms
grafana | logger=migrator t=2025-06-14T23:12:47.217314302Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.217542399Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=227.377µs
grafana | logger=migrator t=2025-06-14T23:12:47.223546092Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.22474464Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.198898ms
grafana | logger=migrator t=2025-06-14T23:12:47.230566176Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.231843248Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.280132ms
grafana | logger=migrator t=2025-06-14T23:12:47.24002414Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.240802775Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=778.515µs
grafana | logger=migrator t=2025-06-14T23:12:47.24753792Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.249193914Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.656344ms
grafana | logger=migrator t=2025-06-14T23:12:47.25532714Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.258543933Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.215643ms
grafana | logger=migrator t=2025-06-14T23:12:47.266402346Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-14T23:12:47.267308815Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=905.65µs
grafana | logger=migrator t=2025-06-14T23:12:47.272820371Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.274577018Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.754747ms
grafana | logger=migrator t=2025-06-14T23:12:47.281825981Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.282670647Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=846.647µs
grafana | logger=migrator t=2025-06-14T23:12:47.548354727Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.550362032Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=2.009864ms
grafana | logger=migrator t=2025-06-14T23:12:47.679970227Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.682107276Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=2.135969ms
grafana | logger=migrator t=2025-06-14T23:12:47.686867608Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-14T23:12:47.687549811Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=681.712µs
grafana | logger=migrator t=2025-06-14T23:12:47.693792331Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-14T23:12:47.69441865Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=626.05µs
grafana | logger=migrator t=2025-06-14T23:12:47.697727676Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-14T23:12:47.698344706Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=616.53µs
grafana | logger=migrator t=2025-06-14T23:12:47.704574346Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-14T23:12:47.706155417Z level=info msg="Migration successfully executed" id="create star table" duration=1.58448ms
grafana | logger=migrator t=2025-06-14T23:12:47.709808223Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-14T23:12:47.710590369Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=781.976µs
grafana | logger=migrator t=2025-06-14T23:12:47.717047896Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-14T23:12:47.719561517Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.5131ms
grafana | logger=migrator t=2025-06-14T23:12:47.726703086Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-14T23:12:47.728571185Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.867739ms
grafana | logger=migrator t=2025-06-14T23:12:47.733040379Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-14T23:12:47.734589088Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.548369ms
grafana | logger=migrator t=2025-06-14T23:12:47.738846325Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-14T23:12:47.739747794Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=900.249µs
grafana | logger=migrator t=2025-06-14T23:12:47.744472456Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-14T23:12:47.745198269Z level=info msg="Migration successfully executed" id="create org table v1" duration=724.683µs
grafana | logger=migrator t=2025-06-14T23:12:47.749158236Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.75022389Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.064224ms
grafana | logger=migrator t=2025-06-14T23:12:47.755939933Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-14T23:12:47.757123531Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.183838ms
grafana | logger=migrator t=2025-06-14T23:12:47.761243323Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.762327548Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.084555ms
grafana | logger=migrator t=2025-06-14T23:12:47.76833964Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.769211609Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=871.289µs
grafana | logger=migrator t=2025-06-14T23:12:47.773392993Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.774206648Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=813.026µs
grafana | logger=migrator t=2025-06-14T23:12:47.777842316Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.777876577Z level=info msg="Migration successfully executed" id="Update org table charset" duration=34.282µs
grafana | logger=migrator t=2025-06-14T23:12:47.781557044Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.781624977Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=67.872µs
grafana | logger=migrator t=2025-06-14T23:12:47.788085423Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-14T23:12:47.788400934Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=315.601µs
grafana | logger=migrator t=2025-06-14T23:12:47.792103703Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-14T23:12:47.793346472Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.243069ms
grafana | logger=migrator t=2025-06-14T23:12:47.798535649Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-14T23:12:47.799377315Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=841.266µs
grafana | logger=migrator t=2025-06-14T23:12:47.825793913Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-14T23:12:47.827488487Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.704534ms
grafana | logger=migrator t=2025-06-14T23:12:47.834056558Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-14T23:12:47.834795232Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=737.943µs
grafana | logger=migrator t=2025-06-14T23:12:47.838953585Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-14T23:12:47.840422332Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.467317ms
grafana | logger=migrator t=2025-06-14T23:12:47.847067015Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.848325416Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.25729ms
grafana | logger=migrator t=2025-06-14T23:12:47.857107227Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-14T23:12:47.865584459Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.477572ms
grafana | logger=migrator t=2025-06-14T23:12:47.869723642Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-14T23:12:47.870530708Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=810.976µs
grafana | logger=migrator t=2025-06-14T23:12:47.874284138Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.875149516Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=861.208µs
grafana | logger=migrator t=2025-06-14T23:12:47.882908734Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.88431369Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.403986ms
grafana | logger=migrator t=2025-06-14T23:12:47.888550395Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-14T23:12:47.889221807Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=669.612µs
grafana | logger=migrator t=2025-06-14T23:12:47.893273377Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-14T23:12:47.894193786Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=920.069µs
grafana | logger=migrator t=2025-06-14T23:12:47.899963831Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-14T23:12:47.899997042Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=38.421µs
grafana | logger=migrator t=2025-06-14T23:12:47.904184037Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.907273426Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.087719ms
grafana | logger=migrator t=2025-06-14T23:12:47.911578444Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-14T23:12:47.913521366Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.941822ms
grafana | logger=migrator t=2025-06-14T23:12:47.917415711Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.919339442Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.922831ms
grafana | logger=migrator t=2025-06-14T23:12:47.924975003Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.925823841Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=848.748µs
grafana | logger=migrator t=2025-06-14T23:12:47.929708525Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.931673688Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.964533ms
grafana | logger=migrator t=2025-06-14T23:12:47.935522281Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.93641331Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=890.179µs
grafana | logger=migrator t=2025-06-14T23:12:47.942427563Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-14T23:12:47.943742955Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.315292ms
grafana | logger=migrator t=2025-06-14T23:12:47.948197108Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.948296431Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=100.283µs
grafana | logger=migrator t=2025-06-14T23:12:47.954567872Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-14T23:12:47.954595783Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.501µs
grafana | logger=migrator t=2025-06-14T23:12:47.962580289Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.964925165Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.347776ms
grafana | logger=migrator t=2025-06-14T23:12:47.968712796Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.97103013Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.317054ms
grafana | logger=migrator t=2025-06-14T23:12:47.990580207Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:47.994055838Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.470061ms
grafana | logger=migrator t=2025-06-14T23:12:48.001227879Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:48.003391727Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.162738ms
grafana | logger=migrator t=2025-06-14T23:12:48.007310382Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-14T23:12:48.007625002Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=313.91µs
grafana | logger=migrator t=2025-06-14T23:12:48.01108871Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-14T23:12:48.011994888Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=906.568µs
grafana | logger=migrator t=2025-06-14T23:12:48.017998567Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-14T23:12:48.018818474Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=819.567µs
grafana | logger=migrator t=2025-06-14T23:12:48.022453258Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-14T23:12:48.022492759Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.641µs
grafana | logger=migrator t=2025-06-14T23:12:48.026687732Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-14T23:12:48.028104826Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.416054ms
grafana | logger=migrator t=2025-06-14T23:12:48.03619374Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-14T23:12:48.037079728Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=885.228µs
grafana | logger=migrator t=2025-06-14T23:12:48.040581059Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-14T23:12:48.04795792Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.376331ms
grafana | logger=migrator t=2025-06-14T23:12:48.052323878Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-14T23:12:48.053220916Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=896.548µs
grafana | logger=migrator t=2025-06-14T23:12:48.061152526Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-14T23:12:48.062326072Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.172386ms
grafana | logger=migrator t=2025-06-14T23:12:48.071890234Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-14T23:12:48.073444082Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.553458ms
grafana | logger=migrator t=2025-06-14T23:12:48.079147762Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-14T23:12:48.079608416Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=459.954µs
grafana | logger=migrator t=2025-06-14T23:12:48.084400337Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-14T23:12:48.085172391Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=770.644µs
grafana | logger=migrator t=2025-06-14T23:12:48.092207793Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-14T23:12:48.09498254Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.772447ms
grafana | logger=migrator t=2025-06-14T23:12:48.136417475Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-14T23:12:48.138787759Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=2.370154ms
grafana | logger=migrator t=2025-06-14T23:12:48.162269028Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-14T23:12:48.162764763Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=496.385µs
grafana | logger=migrator t=2025-06-14T23:12:48.170635141Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-14T23:12:48.170892389Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=256.468µs
grafana | logger=migrator t=2025-06-14T23:12:48.174869764Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-14T23:12:48.176259748Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.388664ms
grafana | logger=migrator t=2025-06-14T23:12:48.180085548Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-14T23:12:48.183398223Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.311795ms
grafana | logger=migrator t=2025-06-14T23:12:48.189534236Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-14T23:12:48.192662725Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.127408ms
grafana | logger=migrator t=2025-06-14T23:12:48.197859598Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-14T23:12:48.198700294Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=840.546µs
grafana | logger=migrator t=2025-06-14T23:12:48.202098801Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-14T23:12:48.204463186Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.363575ms
grafana | logger=migrator t=2025-06-14T23:12:48.211124685Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-14T23:12:48.213930573Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.809228ms
grafana | logger=migrator t=2025-06-14T23:12:48.218293521Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-14T23:12:48.218766886Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=473.465µs
grafana | logger=migrator t=2025-06-14T23:12:48.222218044Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-14T23:12:48.224587499Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.367725ms
grafana | logger=migrator t=2025-06-14T23:12:48.230767923Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-14T23:12:48.231673732Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=905.399µs
grafana | logger=migrator t=2025-06-14T23:12:48.235484372Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-14T23:12:48.235981847Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=496.205µs
grafana | logger=migrator t=2025-06-14T23:12:48.241094198Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-14T23:12:48.242167863Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.069355ms
grafana | logger=migrator t=2025-06-14T23:12:48.251329481Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-14T23:12:48.253091476Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.760435ms
grafana | logger=migrator t=2025-06-14T23:12:48.256940077Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-14T23:12:48.257867636Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=927.159µs
grafana | logger=migrator t=2025-06-14T23:12:48.261203611Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:48.262657717Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.453436ms
grafana | logger=migrator t=2025-06-14T23:12:48.270423642Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-14T23:12:48.271802385Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.378243ms
grafana | logger=migrator t=2025-06-14T23:12:48.278686732Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-14T23:12:48.28594179Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.256768ms
grafana | logger=migrator t=2025-06-14T23:12:48.291863236Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-14T23:12:48.293343173Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.480057ms
grafana | logger=migrator t=2025-06-14T23:12:48.351223814Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-14T23:12:48.352474634Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.25079ms
grafana | logger=migrator t=2025-06-14T23:12:48.357003767Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-14T23:12:48.358260506Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.255999ms
grafana | logger=migrator t=2025-06-14T23:12:48.367161596Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-14T23:12:48.367857357Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=694.281µs
grafana | logger=migrator t=2025-06-14T23:12:48.371194713Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-14T23:12:48.37587212Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.675977ms
grafana | logger=migrator t=2025-06-14T23:12:48.379855696Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-14T23:12:48.382248491Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.392895ms
grafana | logger=migrator t=2025-06-14T23:12:48.388323532Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-14T23:12:48.388363193Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=39.721µs
grafana | logger=migrator t=2025-06-14T23:12:48.396029344Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-14T23:12:48.396374956Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=348.942µs
grafana | logger=migrator t=2025-06-14T23:12:48.403635313Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-14T23:12:48.406346589Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.710856ms
grafana | logger=migrator t=2025-06-14T23:12:48.416258031Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-14T23:12:48.416787408Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=532.117µs
grafana | logger=migrator t=2025-06-14T23:12:48.425377038Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-14T23:12:48.425817832Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=442.634µs
grafana | logger=migrator t=2025-06-14T23:12:48.432380218Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-14T23:12:48.434829935Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.452767ms
grafana | logger=migrator t=2025-06-14T23:12:48.442734204Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-14T23:12:48.442981552Z level=info msg="Migration successfully executed" id="Update uid value" duration=248.008µs
grafana | logger=migrator t=2025-06-14T23:12:48.446909616Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-14T23:12:48.448216247Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.30594ms
grafana | logger=migrator t=2025-06-14T23:12:48.454558906Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-14T23:12:48.455410543Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=850.987µs
grafana | logger=migrator t=2025-06-14T23:12:48.460139511Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-14T23:12:48.462658371Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.51851ms
grafana | logger=migrator t=2025-06-14T23:12:48.465922993Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-14T23:12:48.468404512Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.480859ms
grafana | logger=migrator t=2025-06-14T23:12:48.474141832Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-14T23:12:48.474165313Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=24.211µs
grafana | logger=migrator t=2025-06-14T23:12:48.477773327Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-14T23:12:48.47849399Z level=info msg="Migration successfully executed" id="create api_key table" duration=720.502µs
grafana | logger=migrator t=2025-06-14T23:12:48.484972353Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-14T23:12:48.487005227Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=2.036544ms
grafana | logger=migrator t=2025-06-14T23:12:48.490671822Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-14T23:12:48.491997305Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.324263ms
grafana | logger=migrator t=2025-06-14T23:12:48.49790146Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-14T23:12:48.498871781Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=970.5µs
grafana | logger=migrator t=2025-06-14T23:12:48.549942377Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
- v1" grafana | logger=migrator t=2025-06-14T23:12:48.551411804Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.472647ms grafana | logger=migrator t=2025-06-14T23:12:48.558239429Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-14T23:12:48.559039285Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=799.215µs grafana | logger=migrator t=2025-06-14T23:12:48.660373913Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-14T23:12:48.661445117Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.072433ms grafana | logger=migrator t=2025-06-14T23:12:48.695005993Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-14T23:12:48.705292487Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.287414ms grafana | logger=migrator t=2025-06-14T23:12:48.708379244Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-14T23:12:48.709030184Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=651.02µs grafana | logger=migrator t=2025-06-14T23:12:48.718365919Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-14T23:12:48.719396541Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.031202ms grafana | logger=migrator t=2025-06-14T23:12:48.723916652Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-14T23:12:48.724728729Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=812.066µs grafana | logger=migrator t=2025-06-14T23:12:48.72764263Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-14T23:12:48.728578829Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=934.679µs grafana | logger=migrator t=2025-06-14T23:12:48.738377988Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-14T23:12:48.73874517Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=367.382µs grafana | logger=migrator t=2025-06-14T23:12:48.742117476Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-14T23:12:48.742730005Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=611.689µs grafana | logger=migrator t=2025-06-14T23:12:48.745849503Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-14T23:12:48.745880554Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=33.311µs grafana | logger=migrator t=2025-06-14T23:12:48.749395385Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-14T23:12:48.752366938Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.931302ms grafana | logger=migrator t=2025-06-14T23:12:48.757772109Z level=info msg="Executing 
migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-14T23:12:48.760497254Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.721245ms grafana | logger=migrator t=2025-06-14T23:12:48.764281613Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-14T23:12:48.76449019Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=208.647µs grafana | logger=migrator t=2025-06-14T23:12:48.767821164Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-14T23:12:48.770500659Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.679135ms grafana | logger=migrator t=2025-06-14T23:12:48.777423187Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-14T23:12:48.780029389Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.621793ms grafana | logger=migrator t=2025-06-14T23:12:48.78323187Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-14T23:12:48.784033965Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=801.695µs grafana | logger=migrator t=2025-06-14T23:12:48.787023099Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-14T23:12:48.787681209Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=657.39µs grafana | logger=migrator t=2025-06-14T23:12:48.795395312Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-14T23:12:48.79628653Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=891.268µs grafana | logger=migrator t=2025-06-14T23:12:48.801618078Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-14T23:12:48.802433464Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=815.186µs grafana | logger=migrator t=2025-06-14T23:12:48.805904243Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-14T23:12:48.806579754Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=675.061µs grafana | logger=migrator t=2025-06-14T23:12:48.821484213Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-14T23:12:48.822164615Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=680.212µs grafana | logger=migrator t=2025-06-14T23:12:48.829279818Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-14T23:12:48.829299679Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=20.731µs grafana | logger=migrator t=2025-06-14T23:12:48.836264029Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator 
t=2025-06-14T23:12:48.83629142Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=27.931µs grafana | logger=migrator t=2025-06-14T23:12:48.902335157Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-14T23:12:48.904705192Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.371205ms grafana | logger=migrator t=2025-06-14T23:12:48.909686269Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-14T23:12:48.911786935Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.099896ms grafana | logger=migrator t=2025-06-14T23:12:48.915411809Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-14T23:12:48.91543052Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=19.551µs grafana | logger=migrator t=2025-06-14T23:12:48.918070943Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-14T23:12:48.918728193Z level=info msg="Migration successfully executed" id="create quota table v1" duration=657.96µs grafana | logger=migrator t=2025-06-14T23:12:48.923896496Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-14T23:12:48.924782344Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=882.078µs grafana | logger=migrator t=2025-06-14T23:12:48.928442279Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-14T23:12:48.92846782Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.051µs grafana | logger=migrator t=2025-06-14T23:12:48.931783455Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-14T23:12:48.933029033Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.244669ms grafana | logger=migrator t=2025-06-14T23:12:48.939012051Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-14T23:12:48.939980613Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=971.542µs grafana | logger=migrator t=2025-06-14T23:12:48.945034541Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-14T23:12:48.948122879Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.088068ms grafana | logger=migrator t=2025-06-14T23:12:48.951708041Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-14T23:12:48.951734502Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=27.231µs grafana | logger=migrator t=2025-06-14T23:12:48.955126959Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-14T23:12:48.955440289Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" 
duration=316.47µs grafana | logger=migrator t=2025-06-14T23:12:48.960762646Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-14T23:12:48.973343193Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=12.580377ms grafana | logger=migrator t=2025-06-14T23:12:48.979049572Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-14T23:12:48.979589569Z level=info msg="Migration successfully executed" id="create session table" duration=539.577µs grafana | logger=migrator t=2025-06-14T23:12:48.983488481Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-14T23:12:48.983635046Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=147.065µs grafana | logger=migrator t=2025-06-14T23:12:48.990376579Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-14T23:12:48.990566144Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=186.276µs grafana | logger=migrator t=2025-06-14T23:12:48.995892212Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-14T23:12:48.99677402Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=882.008µs grafana | logger=migrator t=2025-06-14T23:12:49.000995652Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-14T23:12:49.001523349Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=529.987µs grafana | logger=migrator t=2025-06-14T23:12:49.004832893Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-14T23:12:49.004856954Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=24.271µs grafana | logger=migrator t=2025-06-14T23:12:49.009946186Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-14T23:12:49.009971257Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=25.541µs grafana | logger=migrator t=2025-06-14T23:12:49.013039935Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-14T23:12:49.016171294Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.130289ms grafana | logger=migrator t=2025-06-14T23:12:49.01981892Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-14T23:12:49.023181037Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.361377ms grafana | logger=migrator t=2025-06-14T23:12:49.059760571Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-14T23:12:49.059907016Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=150.955µs grafana | logger=migrator t=2025-06-14T23:12:49.06570662Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-14T23:12:49.065788824Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=81.984µs grafana | 
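Note the jump in duration for "make org_id NOT NULL and DEFAULT VALUE 1" above (12.580377ms against the microseconds of neighbouring steps): SQLite has no ALTER COLUMN, so tightening a column constraint generally means rebuilding the table. A sketch of one way such a step can be done (using plugin_setting for illustration; the log does not name the table, and the schema here is an assumption):

# SQLite cannot ALTER COLUMN, so making org_id NOT NULL DEFAULT 1 is
# typically a table rebuild. Table name and columns are assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plugin_setting (id INTEGER PRIMARY KEY, org_id INTEGER, plugin_id TEXT)")
con.execute("INSERT INTO plugin_setting (org_id, plugin_id) VALUES (NULL, 'piechart')")

con.execute("UPDATE plugin_setting SET org_id = 1 WHERE org_id IS NULL")  # "update NULL org_id to 1"
con.execute("ALTER TABLE plugin_setting RENAME TO plugin_setting_old")
con.execute("CREATE TABLE plugin_setting (id INTEGER PRIMARY KEY, "
            "org_id INTEGER NOT NULL DEFAULT 1, plugin_id TEXT)")
con.execute("INSERT INTO plugin_setting SELECT id, org_id, plugin_id FROM plugin_setting_old")
con.execute("DROP TABLE plugin_setting_old")
con.commit()
print(con.execute("SELECT org_id FROM plugin_setting").fetchone()[0])  # -> 1

Backfilling the NULLs first (the preceding "update NULL org_id to 1" step) is what lets the NOT NULL copy succeed without constraint violations.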
grafana | logger=migrator t=2025-06-14T23:12:49.070753491Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-14T23:12:49.071774264Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.023193ms
grafana | logger=migrator t=2025-06-14T23:12:49.076879026Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-14T23:12:49.076915667Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=37.491µs
grafana | logger=migrator t=2025-06-14T23:12:49.083123075Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-14T23:12:49.086796702Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.673027ms
grafana | logger=migrator t=2025-06-14T23:12:49.119142722Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-14T23:12:49.119427391Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=288.029µs
grafana | logger=migrator t=2025-06-14T23:12:49.123234652Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-14T23:12:49.126821466Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.586603ms
grafana | logger=migrator t=2025-06-14T23:12:49.132136025Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-14T23:12:49.135570834Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.435629ms
grafana | logger=migrator t=2025-06-14T23:12:49.138564729Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-14T23:12:49.138583751Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=19.542µs
grafana | logger=migrator t=2025-06-14T23:12:49.142448253Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-14T23:12:49.143201247Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=758.804µs
grafana | logger=migrator t=2025-06-14T23:12:49.147004139Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-14T23:12:49.148038771Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.035512ms
grafana | logger=migrator t=2025-06-14T23:12:49.152863545Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-14T23:12:49.154157006Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.294581ms
grafana | logger=migrator t=2025-06-14T23:12:49.158432302Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-14T23:12:49.159405223Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=975.161µs
grafana | logger=migrator t=2025-06-14T23:12:49.164413373Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-14T23:12:49.165132725Z level=info msg="Migration successfully executed" id="add index alert state" duration=721.733µs
grafana | logger=migrator t=2025-06-14T23:12:49.168851133Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-14T23:12:49.169720552Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=868.979µs
grafana | logger=migrator t=2025-06-14T23:12:49.173888694Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-14T23:12:49.174688349Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=786.154µs
grafana | logger=migrator t=2025-06-14T23:12:49.178214742Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-14T23:12:49.17912059Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=905.738µs
grafana | logger=migrator t=2025-06-14T23:12:49.184077128Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-14T23:12:49.18476291Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=685.772µs
grafana | logger=migrator t=2025-06-14T23:12:49.189338186Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-14T23:12:49.197117943Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.772686ms
grafana | logger=migrator t=2025-06-14T23:12:49.201070659Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-14T23:12:49.201783312Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=710.742µs
grafana | logger=migrator t=2025-06-14T23:12:49.206362678Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-14T23:12:49.207239405Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=877.167µs
grafana | logger=migrator t=2025-06-14T23:12:49.244678647Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-14T23:12:49.245210974Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=536.218µs
grafana | logger=migrator t=2025-06-14T23:12:49.250876335Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-14T23:12:49.251579457Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=704.203µs
grafana | logger=migrator t=2025-06-14T23:12:49.257615029Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-14T23:12:49.258627161Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.013921ms
grafana | logger=migrator t=2025-06-14T23:12:49.263969351Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-14T23:12:49.268739493Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.765822ms
grafana | logger=migrator t=2025-06-14T23:12:49.274650231Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-14T23:12:49.277529782Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.879021ms
grafana | logger=migrator t=2025-06-14T23:12:49.305910716Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-14T23:12:49.308839649Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.923613ms
grafana | logger=migrator t=2025-06-14T23:12:49.312672901Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-14T23:12:49.315501001Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.82784ms
grafana | logger=migrator t=2025-06-14T23:12:49.318708993Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-14T23:12:49.319453597Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=746.994µs
grafana | logger=migrator t=2025-06-14T23:12:49.323458205Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-14T23:12:49.323484136Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.582µs
grafana | logger=migrator t=2025-06-14T23:12:49.326185731Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-14T23:12:49.326208601Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=23.66µs
grafana | logger=migrator t=2025-06-14T23:12:49.329394854Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-14T23:12:49.330138487Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=743.513µs
grafana | logger=migrator t=2025-06-14T23:12:49.334445394Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-14T23:12:49.335379734Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=934.031µs
grafana | logger=migrator t=2025-06-14T23:12:49.340713414Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-14T23:12:49.341331453Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=618.409µs
grafana | logger=migrator t=2025-06-14T23:12:49.34469487Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-14T23:12:49.345303869Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=608.679µs
grafana | logger=migrator t=2025-06-14T23:12:49.349746741Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-14T23:12:49.350449173Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=701.782µs
grafana | logger=migrator t=2025-06-14T23:12:49.353844501Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-14T23:12:49.35818571Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.346089ms
grafana | logger=migrator t=2025-06-14T23:12:49.362024341Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-14T23:12:49.364943744Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.919623ms
grafana | logger=migrator t=2025-06-14T23:12:49.369428277Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-14T23:12:49.369651824Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=223.657µs
grafana | logger=migrator t=2025-06-14T23:12:49.374007604Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-14T23:12:49.375211181Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.172487ms
grafana | logger=migrator t=2025-06-14T23:12:49.380686456Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-14T23:12:49.381622485Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=936.669µs
grafana | logger=migrator t=2025-06-14T23:12:49.387080229Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-14T23:12:49.390377614Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.297725ms
grafana | logger=migrator t=2025-06-14T23:12:49.393707251Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-14T23:12:49.393724051Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=17.821µs
grafana | logger=migrator t=2025-06-14T23:12:49.512724498Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-14T23:12:49.513698839Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=974.881µs
grafana | logger=migrator t=2025-06-14T23:12:49.654137169Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-14T23:12:49.65510357Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=969.741µs
grafana | logger=migrator t=2025-06-14T23:12:49.811056094Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-14T23:12:49.811231709Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=179.546µs
grafana | logger=migrator t=2025-06-14T23:12:49.919637909Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-14T23:12:49.920695873Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.060634ms
grafana | logger=migrator t=2025-06-14T23:12:50.002174806Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-14T23:12:50.004092917Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.738285ms
grafana | logger=migrator t=2025-06-14T23:12:50.011354448Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-14T23:12:50.012597667Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.247089ms grafana | logger=migrator t=2025-06-14T23:12:50.020112127Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-14T23:12:50.021504441Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.385225ms grafana | logger=migrator t=2025-06-14T23:12:50.02587303Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-14T23:12:50.026948885Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.075595ms grafana | logger=migrator t=2025-06-14T23:12:50.030605891Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-14T23:12:50.03152737Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=921.219µs grafana | logger=migrator t=2025-06-14T23:12:50.037115348Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-14T23:12:50.037143099Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.451µs grafana | logger=migrator t=2025-06-14T23:12:50.041720295Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.045967789Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.246994ms grafana | logger=migrator t=2025-06-14T23:12:50.049647477Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-14T23:12:50.050527735Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=879.628µs grafana | logger=migrator t=2025-06-14T23:12:50.054435619Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.060918676Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.480397ms grafana | logger=migrator t=2025-06-14T23:12:50.069671344Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-14T23:12:50.071015967Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.345712ms grafana | logger=migrator t=2025-06-14T23:12:50.07455975Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-14T23:12:50.075488909Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=928.879µs grafana | logger=migrator t=2025-06-14T23:12:50.081036546Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-14T23:12:50.081908883Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=871.637µs grafana | logger=migrator t=2025-06-14T23:12:50.086457139Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-14T23:12:50.098103279Z level=info 
msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.645081ms grafana | logger=migrator t=2025-06-14T23:12:50.101186117Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-14T23:12:50.101707163Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=520.096µs grafana | logger=migrator t=2025-06-14T23:12:50.108324285Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-14T23:12:50.109340107Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.018862ms grafana | logger=migrator t=2025-06-14T23:12:50.115628447Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-14T23:12:50.115904525Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=275.958µs grafana | logger=migrator t=2025-06-14T23:12:50.118781267Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-14T23:12:50.11947734Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=696.343µs grafana | logger=migrator t=2025-06-14T23:12:50.123118495Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-14T23:12:50.123311471Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=192.886µs grafana | logger=migrator t=2025-06-14T23:12:50.129540709Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.133990611Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.449462ms grafana | logger=migrator t=2025-06-14T23:12:50.152340145Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.157656954Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.317689ms grafana | logger=migrator t=2025-06-14T23:12:50.162348274Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.163248142Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=899.268µs grafana | logger=migrator t=2025-06-14T23:12:50.167709925Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.168563171Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=852.466µs grafana | logger=migrator t=2025-06-14T23:12:50.172011721Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-14T23:12:50.172335282Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=323.411µs grafana | logger=migrator t=2025-06-14T23:12:50.175914256Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator 
t=2025-06-14T23:12:50.18138917Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.475374ms grafana | logger=migrator t=2025-06-14T23:12:50.187120202Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-14T23:12:50.188853527Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.731255ms grafana | logger=migrator t=2025-06-14T23:12:50.192882246Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-14T23:12:50.193109493Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=227.367µs grafana | logger=migrator t=2025-06-14T23:12:50.198076461Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-14T23:12:50.198536596Z level=info msg="Migration successfully executed" id="Move region to single row" duration=459.685µs grafana | logger=migrator t=2025-06-14T23:12:50.203033298Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.203953808Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=920.41µs grafana | logger=migrator t=2025-06-14T23:12:50.210268759Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.211488477Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.219388ms grafana | logger=migrator t=2025-06-14T23:12:50.214877546Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.217386536Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=2.50724ms grafana | logger=migrator t=2025-06-14T23:12:50.224104989Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.225332668Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.232209ms grafana | logger=migrator t=2025-06-14T23:12:50.228285162Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.22914655Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=861.298µs grafana | logger=migrator t=2025-06-14T23:12:50.232107494Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-14T23:12:50.232980812Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=872.958µs grafana | logger=migrator t=2025-06-14T23:12:50.237611299Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-14T23:12:50.23763049Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=19.701µs grafana | logger=migrator t=2025-06-14T23:12:50.240723448Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | 
logger=migrator t=2025-06-14T23:12:50.240750599Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=27.291µs grafana | logger=migrator t=2025-06-14T23:12:50.244038964Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-14T23:12:50.244069225Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=27.9µs grafana | logger=migrator t=2025-06-14T23:12:50.248923569Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-14T23:12:50.2502043Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.27584ms grafana | logger=migrator t=2025-06-14T23:12:50.253770784Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-14T23:12:50.255008603Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.237249ms grafana | logger=migrator t=2025-06-14T23:12:50.258325898Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-14T23:12:50.259192127Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=865.598µs grafana | logger=migrator t=2025-06-14T23:12:50.264836116Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-14T23:12:50.26623016Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.393314ms grafana | logger=migrator t=2025-06-14T23:12:50.269577766Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-14T23:12:50.269898706Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=321.12µs grafana | logger=migrator t=2025-06-14T23:12:50.27313706Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-14T23:12:50.273484751Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=350.621µs grafana | logger=migrator t=2025-06-14T23:12:50.277054855Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-14T23:12:50.277076295Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=21.86µs grafana | logger=migrator t=2025-06-14T23:12:50.28287088Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-14T23:12:50.290447481Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=7.573061ms grafana | logger=migrator t=2025-06-14T23:12:50.294398407Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-14T23:12:50.295877304Z level=info msg="Migration successfully executed" id="create team table" duration=1.477248ms grafana | logger=migrator t=2025-06-14T23:12:50.301054728Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-14T23:12:50.301802773Z level=info msg="Migration successfully 
executed" id="add index team.org_id" duration=748.074µs grafana | logger=migrator t=2025-06-14T23:12:50.324828455Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-14T23:12:50.326452027Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.622532ms grafana | logger=migrator t=2025-06-14T23:12:50.330081872Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-14T23:12:50.336985342Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.90365ms grafana | logger=migrator t=2025-06-14T23:12:50.340179923Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-14T23:12:50.340399921Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=219.268µs grafana | logger=migrator t=2025-06-14T23:12:50.344847642Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-14T23:12:50.345859464Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.011432ms grafana | logger=migrator t=2025-06-14T23:12:50.349827151Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-14T23:12:50.354450918Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.619747ms grafana | logger=migrator t=2025-06-14T23:12:50.357727972Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-14T23:12:50.362403411Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.674159ms grafana | logger=migrator t=2025-06-14T23:12:50.368912438Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-14T23:12:50.370139187Z level=info msg="Migration successfully executed" id="create team member table" duration=1.227979ms grafana | logger=migrator t=2025-06-14T23:12:50.37338442Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-14T23:12:50.375096505Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.710985ms grafana | logger=migrator t=2025-06-14T23:12:50.378258786Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-14T23:12:50.379427322Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.167666ms grafana | logger=migrator t=2025-06-14T23:12:50.384396401Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-14T23:12:50.386083665Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.686744ms grafana | logger=migrator t=2025-06-14T23:12:50.39189874Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-14T23:12:50.397020173Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.121413ms grafana | logger=migrator t=2025-06-14T23:12:50.401508135Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-14T23:12:50.405128151Z level=info 
msg="Migration successfully executed" id="Add column external to team_member table" duration=3.619186ms grafana | logger=migrator t=2025-06-14T23:12:50.410211322Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-14T23:12:50.415520691Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.308159ms grafana | logger=migrator t=2025-06-14T23:12:50.420452428Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-14T23:12:50.421523392Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.070524ms grafana | logger=migrator t=2025-06-14T23:12:50.42616363Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-14T23:12:50.427438871Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.275151ms grafana | logger=migrator t=2025-06-14T23:12:50.435685684Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-14T23:12:50.436719786Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.033752ms grafana | logger=migrator t=2025-06-14T23:12:50.439641509Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-14T23:12:50.440685642Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.043773ms grafana | logger=migrator t=2025-06-14T23:12:50.443888474Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-14T23:12:50.445131014Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.24171ms grafana | logger=migrator t=2025-06-14T23:12:50.450189625Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-14T23:12:50.451199207Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.009502ms grafana | logger=migrator t=2025-06-14T23:12:50.454411989Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-14T23:12:50.455451123Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.038964ms grafana | logger=migrator t=2025-06-14T23:12:50.459097069Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-14T23:12:50.460386979Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.291741ms grafana | logger=migrator t=2025-06-14T23:12:50.464639325Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-14T23:12:50.465714609Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.077554ms grafana | logger=migrator t=2025-06-14T23:12:50.472055021Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-14T23:12:50.472617339Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=557.118µs grafana | 
logger=migrator t=2025-06-14T23:12:50.495975802Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-14T23:12:50.496565491Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=589.769µs grafana | logger=migrator t=2025-06-14T23:12:50.501401695Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-14T23:12:50.503863983Z level=info msg="Migration successfully executed" id="create tag table" duration=2.461348ms grafana | logger=migrator t=2025-06-14T23:12:50.508542353Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-14T23:12:50.510314648Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.771485ms grafana | logger=migrator t=2025-06-14T23:12:50.517758796Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-14T23:12:50.519094608Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.335892ms grafana | logger=migrator t=2025-06-14T23:12:50.525837313Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-14T23:12:50.526987059Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.149936ms grafana | logger=migrator t=2025-06-14T23:12:50.534693255Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-14T23:12:50.535880142Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.186647ms grafana | logger=migrator t=2025-06-14T23:12:50.540613683Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T23:12:50.556576981Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.962418ms grafana | logger=migrator t=2025-06-14T23:12:50.559458663Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-14T23:12:50.560084533Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=625.33µs grafana | logger=migrator t=2025-06-14T23:12:50.563193512Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-14T23:12:50.564247005Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.052814ms grafana | logger=migrator t=2025-06-14T23:12:50.568894513Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-14T23:12:50.569319666Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=424.343µs grafana | logger=migrator t=2025-06-14T23:12:50.573239362Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-14T23:12:50.574400998Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.161066ms grafana | logger=migrator t=2025-06-14T23:12:50.578559811Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-14T23:12:50.579386077Z level=info msg="Migration successfully 
executed" id="create user auth table" duration=825.766µs grafana | logger=migrator t=2025-06-14T23:12:50.586559025Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-14T23:12:50.588109735Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.54976ms grafana | logger=migrator t=2025-06-14T23:12:50.592294367Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-14T23:12:50.59238019Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=90.513µs grafana | logger=migrator t=2025-06-14T23:12:50.596264874Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.601951055Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.683661ms grafana | logger=migrator t=2025-06-14T23:12:50.607549573Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.613068309Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.518096ms grafana | logger=migrator t=2025-06-14T23:12:50.616896951Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.622762317Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.864316ms grafana | logger=migrator t=2025-06-14T23:12:50.626728594Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.632332892Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.551427ms grafana | logger=migrator t=2025-06-14T23:12:50.637649982Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.638810498Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.158426ms grafana | logger=migrator t=2025-06-14T23:12:50.642565717Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.650203651Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.638284ms grafana | logger=migrator t=2025-06-14T23:12:50.673833573Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-14T23:12:50.68191797Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=8.082747ms grafana | logger=migrator t=2025-06-14T23:12:50.687466406Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-14T23:12:50.688438308Z level=info msg="Migration successfully executed" id="create server_lock table" duration=972.992µs grafana | logger=migrator t=2025-06-14T23:12:50.692139536Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-14T23:12:50.693400926Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.261081ms grafana | logger=migrator t=2025-06-14T23:12:50.696860546Z level=info msg="Executing migration" id="create user auth token table" grafana | 
logger=migrator t=2025-06-14T23:12:50.697955381Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.094144ms grafana | logger=migrator t=2025-06-14T23:12:50.702614199Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-14T23:12:50.703727204Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.112435ms grafana | logger=migrator t=2025-06-14T23:12:50.708631221Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-14T23:12:50.709985923Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.353723ms grafana | logger=migrator t=2025-06-14T23:12:50.714479277Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-14T23:12:50.715604072Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.124436ms grafana | logger=migrator t=2025-06-14T23:12:50.721782559Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-14T23:12:50.72778295Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.000901ms grafana | logger=migrator t=2025-06-14T23:12:50.731697654Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-14T23:12:50.732814561Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.115916ms grafana | logger=migrator t=2025-06-14T23:12:50.736192488Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-14T23:12:50.742166318Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.97361ms grafana | logger=migrator t=2025-06-14T23:12:50.7469468Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-14T23:12:50.747992373Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.021912ms grafana | logger=migrator t=2025-06-14T23:12:50.752800627Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-14T23:12:50.754127068Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.143576ms grafana | logger=migrator t=2025-06-14T23:12:50.759458548Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-14T23:12:50.760870964Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.411625ms grafana | logger=migrator t=2025-06-14T23:12:50.766617096Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-14T23:12:50.768603969Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.985623ms grafana | logger=migrator t=2025-06-14T23:12:50.773476064Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-14T23:12:50.773542426Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" 
duration=68.812µs grafana | logger=migrator t=2025-06-14T23:12:50.779932319Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-14T23:12:50.780158086Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=226.037µs grafana | logger=migrator t=2025-06-14T23:12:50.784403402Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-14T23:12:50.786088256Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.683784ms grafana | logger=migrator t=2025-06-14T23:12:50.792239652Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T23:12:50.793496661Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.253039ms grafana | logger=migrator t=2025-06-14T23:12:50.797036625Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T23:12:50.798357176Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.319611ms grafana | logger=migrator t=2025-06-14T23:12:50.802288201Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T23:12:50.802410245Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=80.552µs grafana | logger=migrator t=2025-06-14T23:12:50.810498552Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T23:12:50.812690162Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.19631ms grafana | logger=migrator t=2025-06-14T23:12:50.816973329Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T23:12:50.818432475Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.457966ms grafana | logger=migrator t=2025-06-14T23:12:50.848196853Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-14T23:12:50.850222507Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.024703ms grafana | logger=migrator t=2025-06-14T23:12:50.855848286Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-14T23:12:50.857733386Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.88316ms grafana | logger=migrator t=2025-06-14T23:12:50.863980154Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-14T23:12:50.870569745Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.586201ms grafana | logger=migrator t=2025-06-14T23:12:50.875423769Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator 
t=2025-06-14T23:12:50.876443112Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.019064ms grafana | logger=migrator t=2025-06-14T23:12:50.880412058Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-14T23:12:50.880759388Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=347.451µs grafana | logger=migrator t=2025-06-14T23:12:50.885268832Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-14T23:12:50.887163583Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.893951ms grafana | logger=migrator t=2025-06-14T23:12:50.891841041Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-14T23:12:50.893147913Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.305732ms grafana | logger=migrator t=2025-06-14T23:12:50.898320768Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-14T23:12:50.900162996Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.838098ms grafana | logger=migrator t=2025-06-14T23:12:50.904219736Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T23:12:50.90434504Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=123.314µs grafana | logger=migrator t=2025-06-14T23:12:50.908662347Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-14T23:12:50.90967821Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.013883ms grafana | logger=migrator t=2025-06-14T23:12:50.913348456Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-14T23:12:50.915075481Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.728625ms grafana | logger=migrator t=2025-06-14T23:12:50.919855663Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-14T23:12:50.920876946Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.020973ms grafana | logger=migrator t=2025-06-14T23:12:50.924094648Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-14T23:12:50.925133511Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.038083ms grafana | logger=migrator t=2025-06-14T23:12:50.928789468Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-14T23:12:50.935032316Z level=info 
msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.241568ms grafana | logger=migrator t=2025-06-14T23:12:50.93956946Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T23:12:50.940660856Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.093195ms grafana | logger=migrator t=2025-06-14T23:12:50.944351563Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T23:12:50.945390695Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.038632ms grafana | logger=migrator t=2025-06-14T23:12:50.950353014Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-14T23:12:50.977773126Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.419292ms grafana | logger=migrator t=2025-06-14T23:12:50.983243341Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-14T23:12:51.01286279Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.617669ms grafana | logger=migrator t=2025-06-14T23:12:51.027383865Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T23:12:51.029019926Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.638111ms grafana | logger=migrator t=2025-06-14T23:12:51.034023783Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-14T23:12:51.035126618Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.102545ms grafana | logger=migrator t=2025-06-14T23:12:51.038868205Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-14T23:12:51.045372159Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.503604ms grafana | logger=migrator t=2025-06-14T23:12:51.049082805Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-14T23:12:51.053789073Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.705518ms grafana | logger=migrator t=2025-06-14T23:12:51.05943986Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:51.060569696Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.129566ms grafana | logger=migrator t=2025-06-14T23:12:51.064251431Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-14T23:12:51.065394796Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.142145ms grafana | logger=migrator t=2025-06-14T23:12:51.070327761Z level=info 
msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-14T23:12:51.071439346Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.114455ms grafana | logger=migrator t=2025-06-14T23:12:51.075203634Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-14T23:12:51.076266797Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.062913ms grafana | logger=migrator t=2025-06-14T23:12:51.081403469Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T23:12:51.08146415Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=61.521µs grafana | logger=migrator t=2025-06-14T23:12:51.085491856Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.092377543Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.884577ms grafana | logger=migrator t=2025-06-14T23:12:51.09739666Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.104135021Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.734641ms grafana | logger=migrator t=2025-06-14T23:12:51.108062915Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.116076465Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=8.047551ms grafana | logger=migrator t=2025-06-14T23:12:51.119777632Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-14T23:12:51.120819154Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.037482ms grafana | logger=migrator t=2025-06-14T23:12:51.125623785Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-14T23:12:51.126973327Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.347872ms grafana | logger=migrator t=2025-06-14T23:12:51.131734707Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.138506338Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.774662ms grafana | logger=migrator t=2025-06-14T23:12:51.143725612Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.148087149Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.360937ms grafana | logger=migrator t=2025-06-14T23:12:51.151587589Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-14T23:12:51.152398694Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and 
panel_id columns" duration=810.515µs grafana | logger=migrator t=2025-06-14T23:12:51.156241735Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:51.162910324Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.667709ms grafana | logger=migrator t=2025-06-14T23:12:51.168261982Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:51.172783974Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.521412ms grafana | logger=migrator t=2025-06-14T23:12:51.199652456Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:51.199727838Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=76.953µs grafana | logger=migrator t=2025-06-14T23:12:51.204361634Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:51.206198181Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.835368ms grafana | logger=migrator t=2025-06-14T23:12:51.210420343Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-14T23:12:51.211518397Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.097784ms grafana | logger=migrator t=2025-06-14T23:12:51.216257946Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-14T23:12:51.218044133Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.785406ms grafana | logger=migrator t=2025-06-14T23:12:51.224162914Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-14T23:12:51.224183855Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=21.321µs grafana | logger=migrator t=2025-06-14T23:12:51.227919202Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:51.235918353Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.99849ms grafana | logger=migrator t=2025-06-14T23:12:51.240717813Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:51.247218797Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.500024ms grafana | logger=migrator t=2025-06-14T23:12:51.250759318Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:51.257221071Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.458364ms grafana | logger=migrator t=2025-06-14T23:12:51.262761795Z level=info msg="Executing migration" id="add rule_group_idx column to 
alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:51.270065333Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.303119ms grafana | logger=migrator t=2025-06-14T23:12:51.27506072Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-14T23:12:51.279737407Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.676067ms grafana | logger=migrator t=2025-06-14T23:12:51.28656207Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:51.286582421Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=52.122µs grafana | logger=migrator t=2025-06-14T23:12:51.292114564Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-14T23:12:51.293713095Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.597581ms grafana | logger=migrator t=2025-06-14T23:12:51.300142636Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.306855367Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.710481ms grafana | logger=migrator t=2025-06-14T23:12:51.313932188Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-14T23:12:51.3139653Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=50.231µs grafana | logger=migrator t=2025-06-14T23:12:51.318025026Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.325351486Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.32557ms grafana | logger=migrator t=2025-06-14T23:12:51.329638871Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-14T23:12:51.330749596Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.102314ms grafana | logger=migrator t=2025-06-14T23:12:51.337043863Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.347620035Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.576522ms grafana | logger=migrator t=2025-06-14T23:12:51.368143018Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-14T23:12:51.37044296Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=2.299662ms grafana | logger=migrator t=2025-06-14T23:12:51.375243561Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-14T23:12:51.376462829Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.219527ms grafana | 
logger=migrator t=2025-06-14T23:12:51.381567289Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.388365702Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.797323ms grafana | logger=migrator t=2025-06-14T23:12:51.393390459Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-14T23:12:51.394604988Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.214539ms grafana | logger=migrator t=2025-06-14T23:12:51.398688635Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-14T23:12:51.399949765Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.26053ms grafana | logger=migrator t=2025-06-14T23:12:51.404867389Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-14T23:12:51.405802389Z level=info msg="Migration successfully executed" id="create alert_image table" duration=936.38µs grafana | logger=migrator t=2025-06-14T23:12:51.409875336Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-14T23:12:51.410994152Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.118526ms grafana | logger=migrator t=2025-06-14T23:12:51.414635086Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-14T23:12:51.414652616Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=18.09µs grafana | logger=migrator t=2025-06-14T23:12:51.420112287Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-14T23:12:51.422405149Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.299792ms grafana | logger=migrator t=2025-06-14T23:12:51.429649936Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.430791962Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.141696ms grafana | logger=migrator t=2025-06-14T23:12:51.43422953Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-14T23:12:51.434752306Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-14T23:12:51.439218017Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-14T23:12:51.439784384Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=569.188µs grafana | logger=migrator t=2025-06-14T23:12:51.442998425Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-14T23:12:51.444182132Z level=info msg="Migration successfully executed" id="add unique index on 
orgID to alert_configuration" duration=1.182737ms grafana | logger=migrator t=2025-06-14T23:12:51.448769326Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-14T23:12:51.455873769Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.109264ms grafana | logger=migrator t=2025-06-14T23:12:51.461686771Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-14T23:12:51.462811096Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.123405ms grafana | logger=migrator t=2025-06-14T23:12:51.46675297Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-14T23:12:51.468004069Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.250229ms grafana | logger=migrator t=2025-06-14T23:12:51.471603111Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-14T23:12:51.472561792Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=957.691µs grafana | logger=migrator t=2025-06-14T23:12:51.477853438Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-14T23:12:51.479130668Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.277109ms grafana | logger=migrator t=2025-06-14T23:12:51.483242036Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-14T23:12:51.484501386Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.25893ms grafana | logger=migrator t=2025-06-14T23:12:51.488648986Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-14T23:12:51.488679767Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.321µs grafana | logger=migrator t=2025-06-14T23:12:51.496229944Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-14T23:12:51.496269395Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=40.722µs grafana | logger=migrator t=2025-06-14T23:12:51.500302292Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-14T23:12:51.510726078Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.424646ms grafana | logger=migrator t=2025-06-14T23:12:51.51460887Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-14T23:12:51.515019423Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=410.513µs grafana | logger=migrator t=2025-06-14T23:12:51.518410459Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-14T23:12:51.519625067Z level=info msg="Migration successfully executed" id="add index 
library_element org_id-folder_uid-name-kind" duration=1.211797ms grafana | logger=migrator t=2025-06-14T23:12:51.545158778Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-14T23:12:51.545777067Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=617.469µs grafana | logger=migrator t=2025-06-14T23:12:51.549764822Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-14T23:12:51.550836606Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.071134ms grafana | logger=migrator t=2025-06-14T23:12:51.554436029Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-14T23:12:51.555359327Z level=info msg="Migration successfully executed" id="create secrets table" duration=923.049µs grafana | logger=migrator t=2025-06-14T23:12:51.560599862Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-14T23:12:51.592372267Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.772195ms grafana | logger=migrator t=2025-06-14T23:12:51.597951862Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-14T23:12:51.605171519Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.219667ms grafana | logger=migrator t=2025-06-14T23:12:51.60904305Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-14T23:12:51.609225486Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=181.796µs grafana | logger=migrator t=2025-06-14T23:12:51.613946184Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-14T23:12:51.651649066Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.701312ms grafana | logger=migrator t=2025-06-14T23:12:51.65717365Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-14T23:12:51.68972367Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.545729ms grafana | logger=migrator t=2025-06-14T23:12:51.71330923Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-14T23:12:51.714843117Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.535088ms grafana | logger=migrator t=2025-06-14T23:12:51.722563179Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-14T23:12:51.723608852Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.051153ms grafana | logger=migrator t=2025-06-14T23:12:51.727961699Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-14T23:12:51.728225597Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=263.668µs grafana | logger=migrator t=2025-06-14T23:12:51.731694765Z level=info msg="Executing migration" id="create 
permission table" grafana | logger=migrator t=2025-06-14T23:12:51.732532161Z level=info msg="Migration successfully executed" id="create permission table" duration=836.706µs grafana | logger=migrator t=2025-06-14T23:12:51.740203402Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-14T23:12:51.74172662Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.527988ms grafana | logger=migrator t=2025-06-14T23:12:51.745256771Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-14T23:12:51.746391346Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.134035ms grafana | logger=migrator t=2025-06-14T23:12:51.7519417Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-14T23:12:51.753111727Z level=info msg="Migration successfully executed" id="create role table" duration=1.171317ms grafana | logger=migrator t=2025-06-14T23:12:51.759241019Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-14T23:12:51.767139247Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.899208ms grafana | logger=migrator t=2025-06-14T23:12:51.771337688Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-14T23:12:51.776564072Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.226074ms grafana | logger=migrator t=2025-06-14T23:12:51.779907407Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-14T23:12:51.78094337Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.035253ms grafana | logger=migrator t=2025-06-14T23:12:51.785376958Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-14T23:12:51.786957898Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.57953ms grafana | logger=migrator t=2025-06-14T23:12:51.790349994Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-14T23:12:51.791976406Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.625232ms grafana | logger=migrator t=2025-06-14T23:12:51.796114175Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-14T23:12:51.797026903Z level=info msg="Migration successfully executed" id="create team role table" duration=912.028µs grafana | logger=migrator t=2025-06-14T23:12:51.80522785Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-14T23:12:51.806496761Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.272291ms grafana | logger=migrator t=2025-06-14T23:12:51.810379152Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-14T23:12:51.812315623Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.935621ms grafana | logger=migrator t=2025-06-14T23:12:51.817260608Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator 
t=2025-06-14T23:12:51.819084115Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.823597ms grafana | logger=migrator t=2025-06-14T23:12:51.827474898Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-14T23:12:51.828830141Z level=info msg="Migration successfully executed" id="create user role table" duration=1.356773ms grafana | logger=migrator t=2025-06-14T23:12:51.83265338Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-14T23:12:51.833849349Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.192428ms grafana | logger=migrator t=2025-06-14T23:12:51.837260145Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-14T23:12:51.838452542Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.191157ms grafana | logger=migrator t=2025-06-14T23:12:51.844828043Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-14T23:12:51.846091002Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.265829ms grafana | logger=migrator t=2025-06-14T23:12:51.849344724Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-14T23:12:51.850068296Z level=info msg="Migration successfully executed" id="create builtin role table" duration=723.912µs grafana | logger=migrator t=2025-06-14T23:12:51.854224067Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-14T23:12:51.855701093Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.479466ms grafana | logger=migrator t=2025-06-14T23:12:51.860304097Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-14T23:12:51.861378351Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.074024ms grafana | logger=migrator t=2025-06-14T23:12:51.886401715Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-14T23:12:51.895742889Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.341064ms grafana | logger=migrator t=2025-06-14T23:12:51.899598759Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-14T23:12:51.900376263Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=779.124µs grafana | logger=migrator t=2025-06-14T23:12:51.90825042Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-14T23:12:51.909784819Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.529949ms grafana | logger=migrator t=2025-06-14T23:12:51.913518086Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-14T23:12:51.914833037Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.314481ms grafana | logger=migrator t=2025-06-14T23:12:51.921041992Z level=info msg="Executing migration" id="add unique 
index role.uid" grafana | logger=migrator t=2025-06-14T23:12:51.922152506Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.107814ms grafana | logger=migrator t=2025-06-14T23:12:51.925283334Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-14T23:12:51.926157772Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=871.458µs grafana | logger=migrator t=2025-06-14T23:12:51.929302671Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-14T23:12:51.930491608Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.188387ms grafana | logger=migrator t=2025-06-14T23:12:51.940604295Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-14T23:12:51.948552385Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.95225ms grafana | logger=migrator t=2025-06-14T23:12:51.952204899Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-14T23:12:51.959256549Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.05208ms grafana | logger=migrator t=2025-06-14T23:12:51.962584014Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-14T23:12:51.970789042Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.203718ms grafana | logger=migrator t=2025-06-14T23:12:51.977833662Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-14T23:12:51.988380703Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.539481ms grafana | logger=migrator t=2025-06-14T23:12:51.992211813Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-14T23:12:51.993035768Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=822.655µs grafana | logger=migrator t=2025-06-14T23:12:52.000169352Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-14T23:12:52.001256766Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.086534ms grafana | logger=migrator t=2025-06-14T23:12:52.006874422Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-14T23:12:52.008567006Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.691624ms grafana | logger=migrator t=2025-06-14T23:12:52.012010555Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-14T23:12:52.023667571Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=11.646816ms grafana | logger=migrator t=2025-06-14T23:12:52.031815348Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-14T23:12:52.037707323Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group 
mapping UID index" duration=5.886945ms grafana | logger=migrator t=2025-06-14T23:12:52.059924852Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-14T23:12:52.061568454Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.643992ms grafana | logger=migrator t=2025-06-14T23:12:52.067412348Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-14T23:12:52.068321497Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=908.598µs grafana | logger=migrator t=2025-06-14T23:12:52.072717165Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-14T23:12:52.07443421Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.715865ms grafana | logger=migrator t=2025-06-14T23:12:52.078156587Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-14T23:12:52.078184137Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=27.411µs grafana | logger=migrator t=2025-06-14T23:12:52.082858265Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-14T23:12:52.083742162Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=884.037µs grafana | logger=migrator t=2025-06-14T23:12:52.089560035Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-14T23:12:52.089672259Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=112.874µs grafana | logger=migrator t=2025-06-14T23:12:52.095083649Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-14T23:12:52.095734019Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=647.92µs grafana | logger=migrator t=2025-06-14T23:12:52.099811038Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-14T23:12:52.100867441Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.057503ms grafana | logger=migrator t=2025-06-14T23:12:52.104573457Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-14T23:12:52.105987172Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.22809ms grafana | logger=migrator t=2025-06-14T23:12:52.112803227Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-14T23:12:52.113089455Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=288.109µs grafana | logger=migrator t=2025-06-14T23:12:52.120329814Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-14T23:12:52.120962583Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=631.869µs grafana | logger=migrator t=2025-06-14T23:12:52.126498448Z level=info msg="Executing migration" id="create query_history_star 
table v1" grafana | logger=migrator t=2025-06-14T23:12:52.127962144Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.474877ms grafana | logger=migrator t=2025-06-14T23:12:52.133020093Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-14T23:12:52.134160759Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.140067ms grafana | logger=migrator t=2025-06-14T23:12:52.13832935Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-14T23:12:52.150365349Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.026769ms grafana | logger=migrator t=2025-06-14T23:12:52.155585033Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-14T23:12:52.155620464Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=36.932µs grafana | logger=migrator t=2025-06-14T23:12:52.159240508Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-14T23:12:52.161059265Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.817377ms grafana | logger=migrator t=2025-06-14T23:12:52.164773732Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-14T23:12:52.166098614Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.324982ms grafana | logger=migrator t=2025-06-14T23:12:52.170573285Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-14T23:12:52.17170011Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.126345ms grafana | logger=migrator t=2025-06-14T23:12:52.176478111Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-14T23:12:52.189418858Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.939567ms grafana | logger=migrator t=2025-06-14T23:12:52.192551907Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.193459615Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=907.068µs grafana | logger=migrator t=2025-06-14T23:12:52.196734668Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.198550655Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.816357ms grafana | logger=migrator t=2025-06-14T23:12:52.203808211Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T23:12:52.225636707Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.829326ms grafana | logger=migrator t=2025-06-14T23:12:52.232026159Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-14T23:12:52.233470304Z level=info msg="Migration successfully executed" 
id="create correlation v2" duration=1.443785ms grafana | logger=migrator t=2025-06-14T23:12:52.238566514Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.239744522Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.177408ms grafana | logger=migrator t=2025-06-14T23:12:52.242633713Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.243786519Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.152256ms grafana | logger=migrator t=2025-06-14T23:12:52.246827194Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-14T23:12:52.248013671Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.189407ms grafana | logger=migrator t=2025-06-14T23:12:52.253303688Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-14T23:12:52.253664109Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=359.491µs grafana | logger=migrator t=2025-06-14T23:12:52.259410061Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-14T23:12:52.26035459Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=944.089µs grafana | logger=migrator t=2025-06-14T23:12:52.263700695Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-14T23:12:52.272047858Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.346933ms grafana | logger=migrator t=2025-06-14T23:12:52.274897448Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-14T23:12:52.28133636Z level=info msg="Migration successfully executed" id="add type column" duration=6.437302ms grafana | logger=migrator t=2025-06-14T23:12:52.286242555Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-14T23:12:52.287432323Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.188998ms grafana | logger=migrator t=2025-06-14T23:12:52.29083733Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-14T23:12:52.291915713Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.077643ms grafana | logger=migrator t=2025-06-14T23:12:52.296846769Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.297368505Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.301413193Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.30196563Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.305042807Z level=info msg="Executing migration" id="Drop old 
dashboard public config table" grafana | logger=migrator t=2025-06-14T23:12:52.305890923Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=850.207µs grafana | logger=migrator t=2025-06-14T23:12:52.313608936Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-14T23:12:52.315087732Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.478646ms grafana | logger=migrator t=2025-06-14T23:12:52.32388727Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.324807639Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=919.688µs grafana | logger=migrator t=2025-06-14T23:12:52.329713773Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-14T23:12:52.330958472Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.242429ms grafana | logger=migrator t=2025-06-14T23:12:52.337001202Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.338439338Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.436335ms grafana | logger=migrator t=2025-06-14T23:12:52.343388484Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.344341793Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=952.939µs grafana | logger=migrator t=2025-06-14T23:12:52.347710549Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-14T23:12:52.348563936Z level=info msg="Migration successfully executed" id="Drop public config table" duration=850.697µs grafana | logger=migrator t=2025-06-14T23:12:52.35442562Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-14T23:12:52.355453253Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.026993ms grafana | logger=migrator t=2025-06-14T23:12:52.359219781Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.360184082Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=963.521µs grafana | logger=migrator t=2025-06-14T23:12:52.367986837Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:52.36903181Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.044443ms grafana | logger=migrator t=2025-06-14T23:12:52.37633892Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-14T23:12:52.377224188Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=884.508µs grafana | 
logger=migrator t=2025-06-14T23:12:52.380344066Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-14T23:12:52.400708447Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.363961ms grafana | logger=migrator t=2025-06-14T23:12:52.406554111Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-14T23:12:52.413424598Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.869257ms grafana | logger=migrator t=2025-06-14T23:12:52.417907678Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-14T23:12:52.426452917Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.545059ms grafana | logger=migrator t=2025-06-14T23:12:52.429430461Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-14T23:12:52.42972546Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=294.689µs grafana | logger=migrator t=2025-06-14T23:12:52.437447113Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-14T23:12:52.443862135Z level=info msg="Migration successfully executed" id="add share column" duration=6.414212ms grafana | logger=migrator t=2025-06-14T23:12:52.447109198Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-14T23:12:52.447368276Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=258.838µs grafana | logger=migrator t=2025-06-14T23:12:52.451302109Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-14T23:12:52.453818298Z level=info msg="Migration successfully executed" id="create file table" duration=2.515559ms grafana | logger=migrator t=2025-06-14T23:12:52.45992554Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-14T23:12:52.467375085Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=7.448095ms grafana | logger=migrator t=2025-06-14T23:12:52.474259101Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-14T23:12:52.478584938Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=4.327277ms grafana | logger=migrator t=2025-06-14T23:12:52.488243291Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-14T23:12:52.488902852Z level=info msg="Migration successfully executed" id="create file_meta table" duration=660.581µs grafana | logger=migrator t=2025-06-14T23:12:52.497564945Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-14T23:12:52.498718971Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.153106ms grafana | logger=migrator t=2025-06-14T23:12:52.507351963Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-14T23:12:52.507369484Z 
level=info msg="Migration successfully executed" id="set path collation in file table" duration=18.36µs grafana | logger=migrator t=2025-06-14T23:12:52.512497975Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-14T23:12:52.512558317Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=60.192µs grafana | logger=migrator t=2025-06-14T23:12:52.516960835Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-14T23:12:52.517642577Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=681.632µs grafana | logger=migrator t=2025-06-14T23:12:52.523811011Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-14T23:12:52.524024748Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=212.947µs grafana | logger=migrator t=2025-06-14T23:12:52.52885526Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-14T23:12:52.530402648Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.545398ms grafana | logger=migrator t=2025-06-14T23:12:52.536099187Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-14T23:12:52.546425173Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.316255ms grafana | logger=migrator t=2025-06-14T23:12:52.565000878Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-14T23:12:52.565212494Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=211.746µs grafana | logger=migrator t=2025-06-14T23:12:52.585852583Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-14T23:12:52.588048953Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.19598ms grafana | logger=migrator t=2025-06-14T23:12:52.592701149Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-14T23:12:52.593178414Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=477.005µs grafana | logger=migrator t=2025-06-14T23:12:52.596176529Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-14T23:12:52.596428666Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=251.668µs grafana | logger=migrator t=2025-06-14T23:12:52.601176686Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-14T23:12:52.602244399Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.066773ms grafana | logger=migrator t=2025-06-14T23:12:52.608973281Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-14T23:12:52.615784806Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.810885ms grafana | logger=migrator 
t=2025-06-14T23:12:52.620480943Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-14T23:12:52.627555966Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.074103ms grafana | logger=migrator t=2025-06-14T23:12:52.631030235Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-14T23:12:52.6318474Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=815.965µs grafana | logger=migrator t=2025-06-14T23:12:52.639087889Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-14T23:12:52.715428461Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.328211ms grafana | logger=migrator t=2025-06-14T23:12:52.735205913Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-14T23:12:52.736975239Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.771236ms grafana | logger=migrator t=2025-06-14T23:12:52.744752614Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-14T23:12:52.746262881Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.509527ms grafana | logger=migrator t=2025-06-14T23:12:52.751571918Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-14T23:12:52.77893676Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=27.364522ms grafana | logger=migrator t=2025-06-14T23:12:52.784476364Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-14T23:12:52.794118247Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.644463ms grafana | logger=migrator t=2025-06-14T23:12:52.798185216Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-14T23:12:52.798449214Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=264.228µs grafana | logger=migrator t=2025-06-14T23:12:52.801738227Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-14T23:12:52.802111869Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=373.252µs grafana | logger=migrator t=2025-06-14T23:12:52.812523637Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-14T23:12:52.81293762Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=418.913µs grafana | logger=migrator t=2025-06-14T23:12:52.818266597Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-14T23:12:52.818526765Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=260.078µs grafana | 
logger=migrator t=2025-06-14T23:12:52.823189823Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-14T23:12:52.823455511Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=265.719µs grafana | logger=migrator t=2025-06-14T23:12:52.826830817Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-14T23:12:52.82789403Z level=info msg="Migration successfully executed" id="create folder table" duration=1.062493ms grafana | logger=migrator t=2025-06-14T23:12:52.832593428Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-14T23:12:52.833745895Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.151847ms grafana | logger=migrator t=2025-06-14T23:12:52.840569569Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-14T23:12:52.841708485Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.137256ms grafana | logger=migrator t=2025-06-14T23:12:52.84663139Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-14T23:12:52.846661391Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.051µs grafana | logger=migrator t=2025-06-14T23:12:52.85012653Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-14T23:12:52.851254186Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.126406ms grafana | logger=migrator t=2025-06-14T23:12:52.856405668Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-14T23:12:52.857563514Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.153886ms grafana | logger=migrator t=2025-06-14T23:12:52.862929073Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-14T23:12:52.864032477Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.102864ms grafana | logger=migrator t=2025-06-14T23:12:52.867836228Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-14T23:12:52.868302772Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=465.844µs grafana | logger=migrator t=2025-06-14T23:12:52.871612156Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-14T23:12:52.871866384Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=256.938µs grafana | logger=migrator t=2025-06-14T23:12:52.87934843Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-14T23:12:52.88063216Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.28779ms grafana | logger=migrator t=2025-06-14T23:12:52.885297107Z level=info msg="Executing migration" 
id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-14T23:12:52.886617898Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.315151ms grafana | logger=migrator t=2025-06-14T23:12:52.900825745Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-14T23:12:52.902591151Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.764766ms grafana | logger=migrator t=2025-06-14T23:12:52.90860468Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-14T23:12:52.90984358Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.238649ms grafana | logger=migrator t=2025-06-14T23:12:52.914504056Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-14T23:12:52.916174359Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.670523ms grafana | logger=migrator t=2025-06-14T23:12:52.921703853Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-14T23:12:52.922767676Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.065564ms grafana | logger=migrator t=2025-06-14T23:12:52.926536085Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-14T23:12:52.927446773Z level=info msg="Migration successfully executed" id="create anon_device table" duration=913.408µs grafana | logger=migrator t=2025-06-14T23:12:52.93242799Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-14T23:12:52.933569406Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.141286ms grafana | logger=migrator t=2025-06-14T23:12:52.939980777Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-14T23:12:52.941849746Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.865539ms grafana | logger=migrator t=2025-06-14T23:12:52.946364929Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-14T23:12:52.94737131Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.007181ms grafana | logger=migrator t=2025-06-14T23:12:52.950762947Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-14T23:12:52.951888803Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.124726ms grafana | logger=migrator t=2025-06-14T23:12:52.955719523Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-14T23:12:52.957827789Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.110996ms grafana | logger=migrator t=2025-06-14T23:12:52.964970814Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | 
logger=migrator t=2025-06-14T23:12:52.965386948Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=416.724µs grafana | logger=migrator t=2025-06-14T23:12:52.969984852Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-14T23:12:52.982801625Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.817243ms grafana | logger=migrator t=2025-06-14T23:12:52.987754982Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-14T23:12:52.988936448Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.183387ms grafana | logger=migrator t=2025-06-14T23:12:52.993469221Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-14T23:12:52.993524703Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=59.812µs grafana | logger=migrator t=2025-06-14T23:12:52.998723436Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-14T23:12:53.000261165Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.537899ms grafana | logger=migrator t=2025-06-14T23:12:53.003581899Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-14T23:12:53.003759025Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=175.815µs grafana | logger=migrator t=2025-06-14T23:12:53.007582125Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-14T23:12:53.009959779Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.371494ms grafana | logger=migrator t=2025-06-14T23:12:53.015815835Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-14T23:12:53.017110685Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.29372ms grafana | logger=migrator t=2025-06-14T23:12:53.021725901Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-14T23:12:53.023878757Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.150317ms grafana | logger=migrator t=2025-06-14T23:12:53.028938697Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-14T23:12:53.030543148Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.608871ms grafana | logger=migrator t=2025-06-14T23:12:53.035534115Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-14T23:12:53.03665534Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.121215ms grafana | logger=migrator 
t=2025-06-14T23:12:53.041200913Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-14T23:12:53.041700939Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=499.236µs grafana | logger=migrator t=2025-06-14T23:12:53.046663305Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-14T23:12:53.047759889Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.096584ms grafana | logger=migrator t=2025-06-14T23:12:53.064849857Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-14T23:12:53.066588982Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.736735ms grafana | logger=migrator t=2025-06-14T23:12:53.071013641Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-14T23:12:53.072305652Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.292741ms grafana | logger=migrator t=2025-06-14T23:12:53.077172435Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-14T23:12:53.08719705Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.023615ms grafana | logger=migrator t=2025-06-14T23:12:53.091876028Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-14T23:12:53.101411358Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.53425ms grafana | logger=migrator t=2025-06-14T23:12:53.105710704Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-14T23:12:53.117545976Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=11.834743ms grafana | logger=migrator t=2025-06-14T23:12:53.12119509Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-14T23:12:53.128913943Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.717783ms grafana | logger=migrator t=2025-06-14T23:12:53.135776669Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-14T23:12:53.136171692Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=406.343µs grafana | logger=migrator t=2025-06-14T23:12:53.143882844Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-14T23:12:53.145174135Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.291231ms grafana | logger=migrator t=2025-06-14T23:12:53.149763279Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-14T23:12:53.160563629Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.79935ms grafana | logger=migrator t=2025-06-14T23:12:53.17041172Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-14T23:12:53.170899265Z level=info msg="Migration successfully executed" id="Update uid column values for 
migration run" duration=488.656µs grafana | logger=migrator t=2025-06-14T23:12:53.175175469Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-14T23:12:53.177366328Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.190519ms grafana | logger=migrator t=2025-06-14T23:12:53.181070025Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T23:12:53.220364832Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=39.284556ms grafana | logger=migrator t=2025-06-14T23:12:53.234097933Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-14T23:12:53.235678293Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.57895ms grafana | logger=migrator t=2025-06-14T23:12:53.240499265Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:53.242530778Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.031023ms grafana | logger=migrator t=2025-06-14T23:12:53.248196007Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-14T23:12:53.24859468Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=397.653µs grafana | logger=migrator t=2025-06-14T23:12:53.252294946Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-14T23:12:53.253481273Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.183897ms grafana | logger=migrator t=2025-06-14T23:12:53.260233216Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-14T23:12:53.284767918Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=24.536652ms grafana | logger=migrator t=2025-06-14T23:12:53.288847536Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-14T23:12:53.28959133Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=726.523µs grafana | logger=migrator t=2025-06-14T23:12:53.293854284Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-14T23:12:53.294743532Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=885.778µs grafana | logger=migrator t=2025-06-14T23:12:53.301796784Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-14T23:12:53.302472185Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=673.671µs grafana | logger=migrator t=2025-06-14T23:12:53.307551295Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-14T23:12:53.308917988Z level=info msg="Migration successfully 
executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.366033ms grafana | logger=migrator t=2025-06-14T23:12:53.319244833Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-14T23:12:53.331834449Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.594926ms grafana | logger=migrator t=2025-06-14T23:12:53.337205568Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-14T23:12:53.347217923Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=10.009425ms grafana | logger=migrator t=2025-06-14T23:12:53.354199603Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-14T23:12:53.363732233Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.53123ms grafana | logger=migrator t=2025-06-14T23:12:53.367659446Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-14T23:12:53.377069092Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.404706ms grafana | logger=migrator t=2025-06-14T23:12:53.381339277Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-14T23:12:53.39065589Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.317123ms grafana | logger=migrator t=2025-06-14T23:12:53.417259158Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-14T23:12:53.429396239Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=12.138241ms grafana | logger=migrator t=2025-06-14T23:12:53.433484198Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-14T23:12:53.434195651Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=711.142µs grafana | logger=migrator t=2025-06-14T23:12:53.439001362Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-14T23:12:53.474119737Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.118865ms grafana | logger=migrator t=2025-06-14T23:12:53.477926886Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-14T23:12:53.485410572Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.482806ms grafana | logger=migrator t=2025-06-14T23:12:53.491108251Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-14T23:12:53.50028442Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.176079ms grafana | logger=migrator t=2025-06-14T23:12:53.507136736Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-14T23:12:53.519415502Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=12.281466ms grafana | logger=migrator t=2025-06-14T23:12:53.523879723Z level=info 
msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-14T23:12:53.533718482Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.837649ms grafana | logger=migrator t=2025-06-14T23:12:53.538641557Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-14T23:12:53.538670728Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=32.351µs grafana | logger=migrator t=2025-06-14T23:12:53.543474539Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-14T23:12:53.543507851Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=30.801µs grafana | logger=migrator t=2025-06-14T23:12:53.548410685Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:53.558797081Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.385476ms grafana | logger=migrator t=2025-06-14T23:12:53.56253523Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.572319357Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.783608ms grafana | logger=migrator t=2025-06-14T23:12:53.585488792Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-14T23:12:53.586141412Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=651.81µs grafana | logger=migrator t=2025-06-14T23:12:53.592196432Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-14T23:12:53.592610425Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=412.943µs grafana | logger=migrator t=2025-06-14T23:12:53.597812399Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:53.609750426Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=11.938106ms grafana | logger=migrator t=2025-06-14T23:12:53.613962678Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.624023365Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.060578ms grafana | logger=migrator t=2025-06-14T23:12:53.627865645Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-14T23:12:53.636254749Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=8.388664ms grafana | logger=migrator t=2025-06-14T23:12:53.642380212Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-14T23:12:53.652532792Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" 
duration=10.151699ms grafana | logger=migrator t=2025-06-14T23:12:53.656700622Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-14T23:12:53.657356084Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=655.202µs grafana | logger=migrator t=2025-06-14T23:12:53.660952907Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:53.670778905Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.827339ms grafana | logger=migrator t=2025-06-14T23:12:53.676241097Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.684692493Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=8.450376ms grafana | logger=migrator t=2025-06-14T23:12:53.688806093Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-14T23:12:53.689116683Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=311.56µs grafana | logger=migrator t=2025-06-14T23:12:53.694446821Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-14T23:12:53.69506285Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=613.149µs grafana | logger=migrator t=2025-06-14T23:12:53.699347075Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-14T23:12:53.701277445Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.92915ms grafana | logger=migrator t=2025-06-14T23:12:53.708589056Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-14T23:12:53.708623237Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=35.271µs grafana | logger=migrator t=2025-06-14T23:12:53.713865532Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-14T23:12:53.713893373Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=28.861µs grafana | logger=migrator t=2025-06-14T23:12:53.718117415Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-14T23:12:53.718670122Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=551.987µs grafana | logger=migrator t=2025-06-14T23:12:53.725540709Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.737732982Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.191823ms grafana | logger=migrator t=2025-06-14T23:12:53.761339336Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:53.774272412Z level=info msg="Migration successfully 
executed" id="add updated_by column to alert_rule table" duration=12.933896ms grafana | logger=migrator t=2025-06-14T23:12:53.777948529Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-14T23:12:53.778805005Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=853.996µs grafana | logger=migrator t=2025-06-14T23:12:53.787629023Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-14T23:12:53.788909834Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.280391ms grafana | logger=migrator t=2025-06-14T23:12:53.794952014Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-14T23:12:53.807337213Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=12.345808ms grafana | logger=migrator t=2025-06-14T23:12:53.811646909Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.819984391Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=8.332782ms grafana | logger=migrator t=2025-06-14T23:12:53.824968028Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-14T23:12:53.825102752Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-14T23:12:53.825436763Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-14T23:12:53.825460954Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=492.807µs grafana | logger=migrator t=2025-06-14T23:12:53.828948593Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-14T23:12:53.829662636Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=713.372µs grafana | logger=migrator t=2025-06-14T23:12:53.83329366Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-14T23:12:53.834588771Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.294132ms grafana | logger=migrator t=2025-06-14T23:12:53.840211438Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-14T23:12:53.842405047Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.192159ms grafana | logger=migrator t=2025-06-14T23:12:53.850298735Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-14T23:12:53.852423412Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.122947ms grafana | logger=migrator t=2025-06-14T23:12:53.858046839Z level=info msg="Executing 
migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-14T23:12:53.859168905Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.121696ms grafana | logger=migrator t=2025-06-14T23:12:53.862786548Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:53.872283177Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.496659ms grafana | logger=migrator t=2025-06-14T23:12:53.877698057Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:53.887126934Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.427987ms grafana | logger=migrator t=2025-06-14T23:12:53.897878402Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-14T23:12:53.909921242Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=12.000618ms grafana | logger=migrator t=2025-06-14T23:12:53.929618571Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-14T23:12:53.940602017Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.982956ms grafana | logger=migrator t=2025-06-14T23:12:53.946039228Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-14T23:12:53.946378879Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-14T23:12:53.946396869Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=357.951µs grafana | logger=migrator t=2025-06-14T23:12:53.949371303Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-14T23:12:53.950628742Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.254059ms grafana | logger=migrator t=2025-06-14T23:12:53.953611236Z level=info msg="migrations completed" performed=654 skipped=0 duration=7.024259041s grafana | logger=migrator t=2025-06-14T23:12:53.954322759Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-14T23:12:53.969793065Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-14T23:12:53.970095935Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-14T23:12:53.976851678Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-14T23:12:54.090586548Z level=info msg="Restored cache from database" duration=518.796µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.099120548Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-14T23:12:54.099146809Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-14T23:12:54.106938726Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-14T23:12:54.108633051Z level=info msg="Migration successfully 
executed" id="create resource_migration_log table" duration=1.693625ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.113311398Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-14T23:12:54.113325579Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=14.491µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.117535073Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-14T23:12:54.117893634Z level=info msg="Migration successfully executed" id="drop table resource" duration=357.591µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.1247336Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-14T23:12:54.126682082Z level=info msg="Migration successfully executed" id="create table resource" duration=1.948142ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.132342562Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-14T23:12:54.133953683Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.611651ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.139052074Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.139133598Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=81.664µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.144469766Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.145713726Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.2433ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.152119329Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-14T23:12:54.153769822Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.650613ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.161209877Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-14T23:12:54.162524159Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.314002ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.168789958Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-14T23:12:54.168993834Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=204.557µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.176086979Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-14T23:12:54.178135114Z level=info msg="Migration successfully executed" id="create table resource_version" duration=2.050634ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.182579665Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-14T23:12:54.183823254Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.231158ms grafana | logger=resource-migrator 
t=2025-06-14T23:12:54.190526997Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-14T23:12:54.190700232Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=177.115µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.19474421Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-14T23:12:54.196319701Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.575481ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.200162392Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-14T23:12:54.20165215Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.488648ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.208056393Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-14T23:12:54.209372035Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.315192ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.213936609Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.223840413Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=9.904254ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.228544452Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-14T23:12:54.237196097Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.649105ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.241518794Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-14T23:12:54.242459033Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=953.74µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.246093669Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-14T23:12:54.246988737Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=891.718µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.284008851Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.298097738Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=14.090046ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.303110087Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-14T23:12:54.314856069Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=11.745482ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.318487765Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-14T23:12:54.318537086Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator 
t=2025-06-14T23:12:54.318943379Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=452.584µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.322865763Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-14T23:12:54.323798712Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=936.839µs grafana | logger=resource-migrator t=2025-06-14T23:12:54.330120273Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.343758276Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=13.638413ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.348540137Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-14T23:12:54.351223382Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.686335ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.355495488Z level=info msg="migrations completed" performed=26 skipped=0 duration=248.599363ms grafana | logger=resource-migrator t=2025-06-14T23:12:54.356112117Z level=info msg="Unlocking database" grafana | t=2025-06-14T23:12:54.356371155Z level=info caller=logger.go:214 time=2025-06-14T23:12:54.356340755Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-14T23:12:54.36785602Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-14T23:12:54.404742739Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-14T23:12:54.4047713Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-14T23:12:54.404794571Z level=info msg="Plugins loaded" count=53 duration=36.939412ms grafana | logger=query_data t=2025-06-14T23:12:54.409474569Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-14T23:12:54.414393506Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-14T23:12:54.42968497Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-14T23:12:54.451871783Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-14T23:12:54.451910605Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-14T23:12:54.456285214Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-14T23:12:54.456675876Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-14T23:12:54.456912334Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2025-06-14T23:12:54.459996671Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:54.464618218Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=http.server t=2025-06-14T23:12:54.465980311Z level=info msg="HTTP Server 
Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.state.manager t=2025-06-14T23:12:54.526291914Z level=info msg="State cache has been initialized" states=0 duration=69.615358ms grafana | logger=ngalert.scheduler t=2025-06-14T23:12:54.526336595Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-14T23:12:54.52650749Z level=info msg=starting first_tick=2025-06-14T23:13:00Z grafana | logger=grafana.update.checker t=2025-06-14T23:12:54.55300499Z level=info msg="Update check succeeded" duration=96.246761ms grafana | logger=plugins.update.checker t=2025-06-14T23:12:54.560269941Z level=info msg="Update check succeeded" duration=103.292416ms grafana | logger=provisioning.datasources t=2025-06-14T23:12:54.560510618Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-14T23:12:54.576717613Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-14T23:12:54.585103728Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-14T23:12:54.588368712Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=provisioning.alerting t=2025-06-14T23:12:54.61984389Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-14T23:12:54.619894631Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-14T23:12:54.622971049Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-14T23:12:54.666127228Z level=info msg="Patterns update finished" duration=103.913505ms grafana | logger=plugin.installer t=2025-06-14T23:12:54.870762996Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-14T23:12:54.93397751Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-14T23:12:54.96141684Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:54.961451771Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=496.802722ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:54.961474922Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.125692634Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.126927623Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.129090222Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.129854596Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.131129107Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | 
logger=grafana-apiserver t=2025-06-14T23:12:55.145801254Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.146733713Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.147559739Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-14T23:12:55.148410217Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-14T23:12:55.218689594Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-14T23:12:55.307812861Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-14T23:12:55.46426403Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-14T23:12:55.514935793Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:55.514982614Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=553.502282ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:55.515018665Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-14T23:12:55.675663158Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-14T23:12:55.745081118Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=provisioning.dashboard t=2025-06-14T23:12:55.759277509Z level=info msg="finished to provision dashboards" grafana | logger=plugins.registration t=2025-06-14T23:12:55.763433251Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:55.763474112Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=248.448746ms grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:55.763506763Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-14T23:12:55.967382672Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-14T23:12:56.023318903Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-14T23:12:56.039679764Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-14T23:12:56.039700774Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=276.18248ms grafana | logger=infra.usagestats t=2025-06-14T23:14:31.4665516Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... 
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2025-06-14 23:12:55,638] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,638] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,639] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,642] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,645] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-14 23:12:55,650] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-14 23:12:55,657] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-14 23:12:55,681] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-14 23:12:55,682] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-14 23:12:55,691] INFO Socket connection established, initiating session, client: /172.17.0.9:37842, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-14 23:12:55,722] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000278b80000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-14 23:12:55,852] INFO Session: 0x100000278b80000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-14 23:12:55,852] INFO EventThread shut down for session: 0x100000278b80000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-14 23:12:56,529] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-14 23:12:56,832] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-14 23:12:56,925] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-14 23:12:56,926] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-14 23:12:56,927] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-14 23:12:56,940] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-14 23:12:56,944] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,944] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,944] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,945] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,947] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-14 23:12:56,951] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-14 23:12:56,957] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 23:12:56,959] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-14 23:12:56,962] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 23:12:56,970] INFO Socket connection established, initiating session, client: /172.17.0.9:37844, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 23:12:56,980] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000278b80001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-14 23:12:56,984] INFO [ZooKeeperClient Kafka server] Connected. 
kafka | [2025-06-14 23:12:57,294] INFO Cluster ID = 9ZY3bO1PSZuNse_HC2BO3A (kafka.server.KafkaServer)
kafka | [2025-06-14 23:12:57,298] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-14 23:12:57,347] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.4-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-14 23:12:57,381] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-14 23:12:57,382] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-14 23:12:57,382] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-14 23:12:57,388] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-14 23:12:57,423] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-14 23:12:57,427] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-14 23:12:57,439] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager)
kafka | [2025-06-14 23:12:57,439] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-14 23:12:57,442] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
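The KafkaConfig dump above is the broker's effective configuration, including the two listeners (PLAINTEXT on 9092 for in-network clients, PLAINTEXT_HOST on 29092 published to the host) and broker.id = 1. The same values can be read back at runtime through the AdminClient API; the following is a minimal sketch, assuming the broker is reachable from the host at the advertised PLAINTEXT_HOST address localhost:29092.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerConfigDump {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // PLAINTEXT_HOST listener from advertised.listeners in the dump above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (Admin admin = Admin.create(props)) {
            // broker.id = 1, per the KafkaConfig values.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}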
kafka | [2025-06-14 23:12:57,452] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-14 23:12:57,499] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-14 23:12:57,520] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-14 23:12:57,532] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-14 23:12:57,573] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-14 23:12:57,914] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-14 23:12:57,918] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-14 23:12:57,946] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-14 23:12:57,947] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-14 23:12:57,947] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-14 23:12:57,952] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-14 23:12:57,957] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-14 23:12:57,978] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:57,980] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:57,984] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:57,985] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:58,004] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-14 23:12:58,032] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-14 23:12:58,062] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749942778048,1749942778048,1,0,0,72057604653187073,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-14 23:12:58,063] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-14 23:12:58,125] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-14 23:12:58,133] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:58,144] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:58,144] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:58,152] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-14 23:12:58,163] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,164] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:12:58,167] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,171] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:12:58,173] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-14 23:12:58,194] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-14 23:12:58,199] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-14 23:12:58,199] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-14 23:12:58,211] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-14 23:12:58,211] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,218] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,221] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,225] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,232] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-14 23:12:58,243] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,248] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,253] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-14 23:12:58,267] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-14 23:12:58,272] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-14 23:12:58,274] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,275] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,275] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,275] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,279] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,279] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,279] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,280] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-14 23:12:58,280] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-14 23:12:58,282] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,290] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-14 23:12:58,295] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-14 23:12:58,296] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-14 23:12:58,296] INFO Kafka startTimeMs: 1749942778288 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-14 23:12:58,297] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-14 23:12:58,303] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-14 23:12:58,304] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-14 23:12:58,310] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-14 23:12:58,310] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-14 23:12:58,311] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-14 23:12:58,311] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-14 23:12:58,313] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-14 23:12:58,314] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-14 23:12:58,314] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,320] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,321] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,321] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,322] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,323] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,337] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-14 23:12:58,383] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
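At this point the broker is registered under /brokers/ids/1, the controller election has completed (epoch 1), and [KafkaServer id=1] reports started. A quick client-side check of that state is a describeCluster call; the sketch below again assumes the host-published localhost:29092 listener.

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            // For this run it should report cluster id 9ZY3bO1PSZuNse_HC2BO3A,
            // a single broker node, and controller id 1.
            System.out.println("cluster id = " + cluster.clusterId().get());
            System.out.println("controller = " + cluster.controller().get());
            cluster.nodes().get().forEach(node -> System.out.println("broker: " + node));
        }
    }
}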
kafka | [2025-06-14 23:12:58,394] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-14 23:12:58,461] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-14 23:13:03,340] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-14 23:13:03,340] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-14 23:13:25,976] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-14 23:13:26,009] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-14 23:13:26,013] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-14 23:13:26,031] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-14 23:13:26,067] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(nzMfrGErST29J2g8u3Bcpg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(mNKmJ1yIRN-J8gvpkMlI4A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-14 23:13:26,068] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
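The two AdminZkClient entries above record the topics this CSIT run depends on: policy-pdp-pap with a single partition and empty config, and the internal __consumer_offsets topic with 50 compacted partitions (normally created by the broker itself on first consumer-group activity). For illustration only, creating equivalent topics through the AdminClient API would look roughly like the following sketch; replication factor 1 matches the single-broker assignment in the log.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCsitTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (Admin admin = Admin.create(props)) {
            // policy-pdp-pap: 1 partition, replication factor 1, empty config {}.
            NewTopic pdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
            // __consumer_offsets: 50 partitions with the configs shown in the log entry.
            NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of(
                            "compression.type", "producer",
                            "cleanup.policy", "compact",
                            "segment.bytes", "104857600"));
            admin.createTopics(List.of(pdpPap, offsets)).all().get();
        }
    }
}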
kafka | [2025-06-14 23:13:26,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,074] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,074] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,074] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,074] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,074] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,075] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,075] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,075] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,075] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,076] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,077] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,078] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,079] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,079] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,079] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,091] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,091] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,091] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,091] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,091] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,092] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,093] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,093] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,093] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
[2025-06-14 23:13:26,093] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 23:13:26,093] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 23:13:26,094] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-14 23:13:26,094] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-14 23:13:26,287] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] 
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,288] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,289] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-14 23:13:26,291] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
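The two record types above repeat for every partition the controller brings up: an INFO line as the partition state machine moves a partition from NewPartition to OnlinePartition with its initial leader/ISR assignment, then a TRACE line as the controller sends a become-leader LeaderAndIsr request to the sole replica, broker 1. The following is a minimal Python sketch of that controller-side transition under a simplified single-broker model; it only mirrors the fields printed in these records, not Kafka's actual Scala PartitionStateMachine:

from dataclasses import dataclass, field

# Illustrative model only: the names below mirror the logged fields,
# not Kafka's internal types.
@dataclass
class LeaderAndIsr:
    leader: int
    leaderEpoch: int = 0
    isr: list = field(default_factory=list)
    leaderRecoveryState: str = "RECOVERED"
    partitionEpoch: int = 0

def bring_online(partition: str, replicas: list) -> LeaderAndIsr:
    # With a single live broker, the first (only) replica becomes leader
    # and the ISR is just [1], matching LeaderAndIsr(leader=1, isr=List(1)).
    state = LeaderAndIsr(leader=replicas[0], isr=list(replicas))
    print(f"Changed partition {partition} from NewPartition to OnlinePartition with state {state}")
    return state

# 50 offsets partitions plus policy-pdp-pap-0 give the 51 partitions in this log.
for p in range(50):
    bring_online(f"__consumer_offsets-{p}", replicas=[1])
bring_online("policy-pdp-pap-0", replicas=[1])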
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-14 23:13:26,292] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-14 23:13:26,293] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-14 23:13:26,294] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-14 23:13:26,296] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
kafka | [2025-06-14 23:13:26,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,299] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,301] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
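Every record in this run carries the same LeaderAndIsrPartitionState payload, differing only in partitionIndex: the 50 partitions of __consumer_offsets (Kafka's default offsets.topic.num.partitions=50) plus policy-pdp-pap-0 account for the 51 become-leader partitions the controller reports above. Which of the 50 offsets partitions stores a given consumer group's offsets is derived from a hash of the group id; a small Python sketch of that mapping, reimplementing Java's String.hashCode as the broker side does (the group id below is hypothetical, not taken from the log):

def java_string_hashcode(s: str) -> int:
    # Java's String.hashCode(): h = 31*h + c over the code units,
    # wrapped to a signed 32-bit integer.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    # Approximates Kafka's partitionFor(groupId): abs(groupId.hashCode) % N,
    # masking off the sign bit the way Kafka's Utils.abs does.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition_for("policy-pap"))  # which __consumer_offsets-N would hold this group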
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-14 23:13:26,302] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-14 23:13:26,307] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
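At this point the controller side is done (the final UpdateMetadata goes to an empty broker set for 0 remaining partitions) and broker 1 begins handling the LeaderAndIsr request for all 51 partitions. Once those transitions complete, the leadership described in this log can be verified from a client; a quick sketch with the kafka-python package, where 'localhost:9092' is an assumed bootstrap address for this compose setup rather than something stated in the log:

from kafka import KafkaConsumer  # pip install kafka-python

# Assumed bootstrap address; adjust to wherever the test broker listens.
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

# partitions_for_topic returns the partition ids the cluster reports:
# expected {0} for policy-pdp-pap and {0..49} for __consumer_offsets here.
print(consumer.partitions_for_topic("policy-pdp-pap"))
print(sorted(consumer.partitions_for_topic("__consumer_offsets")))
consumer.close()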
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,308] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,309] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-14 23:13:26,354] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-14 23:13:26,355] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-14 23:13:26,356] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, 
__consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-14 23:13:26,357] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-14 23:13:26,435] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,450] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,451] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,452] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,453] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 23:13:26,472] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,473] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,473] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,473] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,473] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
kafka | [2025-06-14 23:13:26,487] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,488] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,488] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,489] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,489] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,497] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,498] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,498] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,498] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,498] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,509] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,510] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,510] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,510] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,510] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,524] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,525] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,525] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,525] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,525] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,543] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,545] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,545] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,545] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,546] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,556] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,557] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,557] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,558] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,558] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,571] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,572] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,573] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,573] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,573] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,580] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,581] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,581] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,581] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,581] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,590] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,591] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,591] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,591] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,591] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,599] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,600] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,600] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,600] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,600] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,609] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,611] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,611] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,611] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,611] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,618] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,619] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,619] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,619] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,619] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,631] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,632] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,632] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,632] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,632] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,638] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,639] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,639] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,639] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,639] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,645] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,646] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,646] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,646] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,646] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,655] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,656] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,656] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,656] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,656] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,665] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,666] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,666] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,666] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,666] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,678] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,679] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,679] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,679] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,679] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,690] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,691] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,691] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,691] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,692] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,713] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,714] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,714] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,714] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,714] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,722] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,722] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,722] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,722] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,723] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,731] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,731] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,731] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,732] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,732] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,739] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,740] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,740] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,740] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,740] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,749] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,750] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,750] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,750] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,750] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,760] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,761] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,761] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,762] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,762] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,771] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,772] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,772] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,773] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,773] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,784] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,786] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,786] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,786] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,787] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,794] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,795] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,795] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,795] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,795] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,804] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,805] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,805] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,805] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,805] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,812] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,813] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,813] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,813] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,813] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,820] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,821] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,821] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,821] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,821] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,829] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,830] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,830] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,830] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,830] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,838] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,839] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,839] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,839] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,839] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(nzMfrGErST29J2g8u3Bcpg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,847] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,848] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,848] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,848] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,848] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,860] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,862] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,863] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,863] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,863] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,881] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,882] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,882] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,882] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,883] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
(state.change.logger) kafka | [2025-06-14 23:13:26,891] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,892] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,892] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,892] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,892] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 23:13:26,902] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,903] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,903] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,903] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,903] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 23:13:26,913] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,914] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,914] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,914] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,914] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-14 23:13:26,922] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,923] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,923] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,923] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,923] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 23:13:26,931] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,931] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,931] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,931] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,931] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-14 23:13:26,941] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-14 23:13:26,942] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-14 23:13:26,942] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,942] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-14 23:13:26,943] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
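Each "Created log" line above shows the per-topic overrides applied to __consumer_offsets: cleanup.policy=compact keeps only the newest committed offset per group/topic/partition key, and segment.bytes caps segments at 104857600 bytes (100 MiB). Below is a minimal Java sketch for reading those configs back through the Kafka Admin API; the bootstrap address kafka:9092 is taken from this log, and everything else is illustrative rather than part of this build.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address as logged by this deployment; an assumption outside it.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config cfg = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            // Compaction retains only the latest committed offset per key.
            System.out.println(cfg.get("cleanup.policy"));
            System.out.println(cfg.get("segment.bytes"));
        }
    }
}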
kafka | [2025-06-14 23:13:26,956] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,958] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,958] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,958] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,958] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,965] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,966] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,966] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,966] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,967] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,972] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,973] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,973] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,973] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,973] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,979] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,980] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,980] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,980] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,980] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:26,993] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:26,994] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:26,994] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,994] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:26,994] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:27,000] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:27,001] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:27,001] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:27,001] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:27,001] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:27,009] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-14 23:13:27,009] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-14 23:13:27,009] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:27,009] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-14 23:13:27,010] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(mNKmJ1yIRN-J8gvpkMlI4A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
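At this point the broker has created all 50 __consumer_offsets partitions. A consumer group is pinned to exactly one of them, and the broker leading that partition acts as the group's coordinator. A minimal sketch of the mapping Kafka applies (abs(groupId.hashCode()) % numPartitions, as in GroupMetadataManager.partitionFor); the group id below is hypothetical.

public class GroupPartitionMapper {
    static int partitionFor(String groupId, int offsetsTopicPartitions) {
        // Mask the sign bit rather than Math.abs(): abs(Integer.MIN_VALUE) is negative.
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }

    public static void main(String[] args) {
        // 50 matches the offsets-topic partition count created in this log.
        System.out.println(partitionFor("policy-pap", 50));
    }
}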
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-14 23:13:27,015] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-14 23:13:27,016] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-14 23:13:27,020] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,022] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
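Because this is a single-broker deployment, broker 1 wins the coordinator election for every offsets partition in the lines above and below. A hedged Java sketch for asking the cluster which node coordinates a given group through the Admin API; the group id policy-pap is an illustrative assumption, the broker address comes from this log.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class FindCoordinator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription desc = admin
                .describeConsumerGroups(List.of("policy-pap"))
                .describedGroups().get("policy-pap").get();
            // coordinator() is the broker leading the group's offsets partition.
            System.out.println("coordinator = " + desc.coordinator());
        }
    }
}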
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,023] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,023] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,024] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,024] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,026] INFO [Broker id=1] Finished LeaderAndIsr request in 721ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
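The 51 partitions in the "Finished LeaderAndIsr" line above are the 50 __consumer_offsets partitions plus policy-pdp-pap-0, and the controller's response that follows reports errorCode=0 for each of them. A minimal sketch for reading back the leadership state that was just confirmed; the broker address is taken from this log, the rest is illustrative.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription td = admin
                .describeTopics(List.of("policy-pdp-pap"))
                .all().get().get("policy-pdp-pap");
            // Single-broker run: every partition should report leader 1, ISR [1],
            // matching the state.change.logger lines in this log.
            td.partitions().forEach(p ->
                System.out.println("partition " + p.partition()
                    + " leader=" + p.leader() + " isr=" + p.isr()));
        }
    }
}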
kafka | [2025-06-14 23:13:27,031] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=mNKmJ1yIRN-J8gvpkMlI4A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=nzMfrGErST29J2g8u3Bcpg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-14 23:13:27,034] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,036] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,036] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,036] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,036] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,036] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,037] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,039] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,039] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,039] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,039] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,043] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,044] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,044] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,044] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,044] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,045] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,045] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,045] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,046] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,046] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,046] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-14 23:13:27,046] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 24 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
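The "Cached leader info" TRACE lines show the broker-side metadata cache being populated from the controller's UpdateMetadata request; a client issuing a Metadata request afterwards sees exactly this leader/ISR view. A minimal sketch under assumed client configuration (serializers and bootstrap address are not part of this build):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ShowMetadata {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Triggers a Metadata request; the broker answers from the cache
            // filled by the UpdateMetadata requests logged above.
            producer.partitionsFor("policy-pdp-pap").forEach(System.out::println);
        }
    }
}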
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,048] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 25 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 
epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,049] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,049] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,049] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,049] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-14 23:13:27,050] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 26 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,050] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,050] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,050] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,050] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,051] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,051] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,051] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,051] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,052] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 28 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,052] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-14 23:13:27,052] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. 
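The broker lines above show GroupMetadataManager warming all 50 __consumer_offsets partitions (plus policy-pdp-pap-0) before any consumer group can be served. As an illustrative aside, not part of the CSIT job itself, a minimal kafka-clients AdminClient sketch (hypothetical class name, run against the kafka:9092 listener seen in this log) would list the groups whose state ends up stored in those partitions:

    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupListing;

    public class GroupListingSketch {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            // Same bootstrap address the containers in this log use.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Expect the three groups seen below: policy-pap and the two UUID-named apex/pap groups.
                for (ConsumerGroupListing g : admin.listConsumerGroups().all().get()) {
                    System.out.println(g.groupId());
                }
            }
        }
    }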
kafka | [2025-06-14 23:13:27,053] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-14 23:13:27,524] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5 in Empty state. Created a new member id consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,543] INFO [GroupCoordinator 1]: Preparing to rebalance group 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5 in state PreparingRebalance with old generation 0 (__consumer_offsets-7) (reason: Adding new member consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a with group instance id None; client reason: need to re-join with the given member-id: consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 3803e88f-f5f0-4f29-85ef-f570c18454fb in Empty state. Created a new member id consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,599] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,610] INFO [GroupCoordinator 1]: Preparing to rebalance group 3803e88f-f5f0-4f29-85ef-f570c18454fb in state PreparingRebalance with old generation 0 (__consumer_offsets-36) (reason: Adding new member consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360 with group instance id None; client reason: need to re-join with the given member-id: consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:27,611] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,556] INFO [GroupCoordinator 1]: Stabilized group 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5 generation 1 (__consumer_offsets-7) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,579] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a for group 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,612] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,614] INFO [GroupCoordinator 1]: Stabilized group 3803e88f-f5f0-4f29-85ef-f570c18454fb generation 1 (__consumer_offsets-36) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,630] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-14 23:13:30,638] INFO [GroupCoordinator 1]: Assignment received from leader consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360 for group 3803e88f-f5f0-4f29-85ef-f570c18454fb for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.9:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2025-06-14T23:13:26.497+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2025-06-14T23:13:26.680+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | enable.metrics.push = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | group.protocol = classic
policy-apex-pdp | group.remote.assignor = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.recovery.strategy = none
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.max.ms = 1000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.header.urlencode = false
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2025-06-14T23:13:26.727+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-apex-pdp | [2025-06-14T23:13:26.906+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-apex-pdp | [2025-06-14T23:13:26.907+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-apex-pdp | [2025-06-14T23:13:26.907+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942806904
policy-apex-pdp | [2025-06-14T23:13:26.910+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-1, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2025-06-14T23:13:26.933+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2025-06-14T23:13:26.934+00:00|INFO|ServiceManager|main] service manager starting topics
policy-apex-pdp | [2025-06-14T23:13:26.936+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-apex-pdp | [2025-06-14T23:13:26.958+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | enable.metrics.push = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | group.protocol = classic
policy-apex-pdp | group.remote.assignor = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.recovery.strategy = none
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.max.ms = 1000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.header.urlencode = false
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
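The ConsumerConfig dumps above are the standard kafka-clients settings; the handful that matter for this test are bootstrap.servers = [kafka:9092], the UUID group.id, auto.offset.reset = latest, and the String deserializers. A minimal sketch of an equivalent consumer (hypothetical class name; assumes the standard kafka-clients API shown in the dump) that subscribes to policy-pdp-pap the way the apex-pdp source does:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the ConsumerConfig dump in the log above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // One poll, matching the 15000 ms fetchTimeout the topic source logs.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // PDP_STATUS / PDP_UPDATE JSON payloads
                }
            }
        }
    }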
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2025-06-14T23:13:26.959+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-apex-pdp | [2025-06-14T23:13:26.976+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-apex-pdp | [2025-06-14T23:13:26.976+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-apex-pdp | [2025-06-14T23:13:26.977+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942806976
policy-apex-pdp | [2025-06-14T23:13:26.977+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2025-06-14T23:13:26.979+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de166945-7f22-4d26-9f2e-25c838f74d44, alive=false, publisher=null]]: starting
policy-apex-pdp | [2025-06-14T23:13:26.992+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-apex-pdp | acks = -1
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | batch.size = 16384
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | buffer.memory = 33554432
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = producer-1
policy-apex-pdp | compression.gzip.level = -1
policy-apex-pdp | compression.lz4.level = 9
policy-apex-pdp | compression.type = none
policy-apex-pdp | compression.zstd.level = 3
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | delivery.timeout.ms = 120000
policy-apex-pdp | enable.idempotence = true
policy-apex-pdp | enable.metrics.push = true
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp | linger.ms = 0
policy-apex-pdp | max.block.ms = 60000
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-apex-pdp | max.request.size = 1048576
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-apex-pdp | metadata.recovery.strategy = none
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-apex-pdp | partitioner.class = null
policy-apex-pdp | partitioner.ignore.keys = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retries = 2147483647
policy-apex-pdp | retry.backoff.max.ms = 1000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.header.urlencode = false
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | transaction.timeout.ms = 60000
policy-apex-pdp | transactional.id = null
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp |
policy-apex-pdp | [2025-06-14T23:13:26.994+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-apex-pdp | [2025-06-14T23:13:27.016+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-apex-pdp | [2025-06-14T23:13:27.056+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-apex-pdp | [2025-06-14T23:13:27.056+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-apex-pdp | [2025-06-14T23:13:27.056+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942807056
policy-apex-pdp | [2025-06-14T23:13:27.056+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de166945-7f22-4d26-9f2e-25c838f74d44, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-apex-pdp | [2025-06-14T23:13:27.057+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-apex-pdp | [2025-06-14T23:13:27.057+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-apex-pdp | [2025-06-14T23:13:27.059+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-apex-pdp | [2025-06-14T23:13:27.059+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-apex-pdp | [2025-06-14T23:13:27.063+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-apex-pdp | [2025-06-14T23:13:27.089+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-apex-pdp | []
policy-apex-pdp | [2025-06-14T23:13:27.091+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"95fc3f5d-6568-4e8f-97ab-f0fa038e126f","timestampMs":1749942807065,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-14T23:13:27.345+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2025-06-14T23:13:27.346+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2025-06-14T23:13:27.346+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2025-06-14T23:13:27.347+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING
policy-apex-pdp | [2025-06-14T23:13:27.358+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2025-06-14T23:13:27.358+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2025-06-14T23:13:27.359+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-apex-pdp | [2025-06-14T23:13:27.358+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-apex-pdp | [2025-06-14T23:13:27.483+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A policy-apex-pdp | [2025-06-14T23:13:27.484+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A policy-apex-pdp | [2025-06-14T23:13:27.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2025-06-14T23:13:27.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] (Re-)joining group policy-apex-pdp | [2025-06-14T23:13:27.497+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2025-06-14T23:13:27.533+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Request joining group due to: need to re-join with the given member-id: consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a policy-apex-pdp | [2025-06-14T23:13:27.534+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] (Re-)joining group policy-apex-pdp | [2025-06-14T23:13:28.032+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2025-06-14T23:13:28.032+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | 
[2025-06-14T23:13:30.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a', protocol='range'} policy-apex-pdp | [2025-06-14T23:13:30.567+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Finished assignment for group at generation 1: {consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2025-06-14T23:13:30.594+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2-c54bd97f-5dd2-4f24-824a-9cf00d82dc7a', protocol='range'} policy-apex-pdp | [2025-06-14T23:13:30.594+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2025-06-14T23:13:30.596+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2025-06-14T23:13:30.627+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2025-06-14T23:13:30.675+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5-2, groupId=6a33a5c2-dba5-4e5e-ad6b-75bc5652eaa5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-apex-pdp | [2025-06-14T23:13:47.064+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6858f4db-4239-446f-8164-f22d4e30a37c","timestampMs":1749942827064,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-14T23:13:47.088+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6858f4db-4239-446f-8164-f22d4e30a37c","timestampMs":1749942827064,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-14T23:13:47.092+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:13:47.274+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","timestampMs":1749942827195,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.288+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-apex-pdp | [2025-06-14T23:13:47.288+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89bdf484-1903-4a17-b9b5-c7cec63015ae","timestampMs":1749942827288,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-14T23:13:47.291+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7dabac4a-62b8-4ed8-b162-4cd0e48c9e53","timestampMs":1749942827291,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.309+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89bdf484-1903-4a17-b9b5-c7cec63015ae","timestampMs":1749942827288,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2025-06-14T23:13:47.309+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:13:47.316+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7dabac4a-62b8-4ed8-b162-4cd0e48c9e53","timestampMs":1749942827291,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.316+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:13:47.330+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3bf84127-8a5e-465a-8114-8042d59e9fb8","timestampMs":1749942827196,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.333+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3bf84127-8a5e-465a-8114-8042d59e9fb8","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d336d489-20d4-4023-b0bc-dbf2157bead6","timestampMs":1749942827332,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.341+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3bf84127-8a5e-465a-8114-8042d59e9fb8","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d336d489-20d4-4023-b0bc-dbf2157bead6","timestampMs":1749942827332,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.342+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:13:47.395+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","timestampMs":1749942827347,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.397+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f5bb062d-1fe0-4ab3-ba77-fe506268dd71","timestampMs":1749942827397,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.409+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f5bb062d-1fe0-4ab3-ba77-fe506268dd71","timestampMs":1749942827397,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:13:47.409+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:13:53.588+00:00|INFO|RequestLog|qtp1089680530-32] 172.17.0.1 - - [14/Jun/2025:23:13:53 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-14T23:13:56.122+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.3 - policyadmin [14/Jun/2025:23:13:56 +0000] "GET /metrics HTTP/1.1" 200 2048 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-14T23:14:13.650+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [14/Jun/2025:23:14:13 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0"
policy-apex-pdp | [2025-06-14T23:14:56.079+00:00|INFO|RequestLog|qtp1089680530-26] 172.17.0.3 - policyadmin [14/Jun/2025:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 2063 "" "Prometheus/3.4.1"
policy-apex-pdp | [2025-06-14T23:15:47.287+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"ed8915a4-34f7-46b6-87ab-23654d431c74","timestampMs":1749942947287,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:15:47.301+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"ed8915a4-34f7-46b6-87ab-23654d431c74","timestampMs":1749942947287,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2025-06-14T23:15:47.301+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2025-06-14T23:15:56.084+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.3 - policyadmin [14/Jun/2025:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 2067 "" "Prometheus/3.4.1"
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.7:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-14T23:13:03.817+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-14T23:13:03.887+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 33 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-14T23:13:03.888+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-14T23:13:05.286+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-14T23:13:05.455+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 158 ms. Found 6 JPA repository interfaces.
policy-api | [2025-06-14T23:13:06.128+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-14T23:13:06.142+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-14T23:13:06.144+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-14T23:13:06.144+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-14T23:13:06.186+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-14T23:13:06.187+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2239 ms
policy-api | [2025-06-14T23:13:06.536+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-14T23:13:06.622+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-14T23:13:06.672+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-14T23:13:07.079+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-14T23:13:07.121+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-14T23:13:07.347+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@51dbed72
policy-api | [2025-06-14T23:13:07.349+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-14T23:13:07.433+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | 	Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | 	Database driver: undefined/unknown
policy-api | 	Database version: 16.4
policy-api | 	Autocommit mode: undefined/unknown
policy-api | 	Isolation level: undefined/unknown
policy-api | 	Minimum pool size: undefined/unknown
policy-api | 	Maximum pool size: undefined/unknown
policy-api | [2025-06-14T23:13:09.581+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-14T23:13:09.585+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-14T23:13:10.251+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-14T23:13:11.149+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-14T23:13:12.236+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-14T23:13:12.287+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-14T23:13:13.116+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-14T23:13:13.279+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-14T23:13:13.316+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-14T23:13:13.343+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.2 seconds (process running for 10.785)
policy-api | [2025-06-14T23:13:39.918+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-14T23:13:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-14T23:13:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 1 ms
policy-api | [2025-06-14T23:15:00.890+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS |
policy-csit | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS |
policy-csit | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS |
policy-csit | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
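[Editor's note] The migrator blocks on postgres with a plain nc retry loop before it starts the upgrade, as the "nc: connect ... Connection refused" lines above show. A Python sketch of the same wait-for-port gate: host and port come from the log, while the helper name and retry interval are illustrative, not taken from the repo.

```python
# Sketch of the wait-for-port gate the migrator log shows ("nc: connect
# ... failed" until postgres accepts connections). Host/port are from the
# log; the helper name and retry interval are illustrative assumptions.
import socket
import time

def wait_for_port(host: str, port: int, interval_s: float = 2.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"Connection to {host} {port} port succeeded!")
                return
        except OSError:
            print(f"connect to {host} port {port} failed: retrying")
            time.sleep(interval_s)

wait_for_port("postgres", 5432)
```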
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
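[Editor's note] Every script in the stream above follows the same contract: announce "> upgrade <script>", apply the SQL, echo the statement tags (CREATE TABLE, ALTER TABLE, ...), then report rc=0 and record the outcome. A hedged sketch of that per-script loop; the real migrator is a shell script in the policy/docker repo, and the psql invocation, paths and credentials here are assumptions for illustration only.

```python
# Illustrative sketch of the per-script pattern visible in the log
# ("> upgrade <script>" ... statements ... "rc=0"). Not the actual
# migrator; connection details and the script name are placeholders.
import subprocess

def run_upgrade(script_path: str) -> int:
    print(f"> upgrade {script_path}")
    result = subprocess.run(
        ["psql", "-h", "postgres", "-U", "policy_user", "-d", "policyadmin",
         "-v", "ON_ERROR_STOP=1", "-f", script_path],
        capture_output=True, text=True,
    )
    print(result.stdout, end="")      # e.g. "CREATE TABLE", "ALTER TABLE"
    print(f"rc={result.returncode}")
    return result.returncode          # a changelog row records this outcome

run_upgrade("0100-jpapdpgroup_properties.sql")
```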
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 1300
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:50.922124
policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:50.977313
policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.046817
policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.102724
policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.154496
policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.218252
policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.27878
policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.332781
policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.391139
policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.442997
policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.498503
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.563835
policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.616146
policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.678154
policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.731913
policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.793689
policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.847385
policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.904954
policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:51.961995
policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.018862
policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.077541
policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.133348
policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.179828
policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.247959
policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.286549
policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.33591
policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.388358
policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.447263
policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.508029
policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.558293
policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.618624
policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.66723
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.736936
policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.804201
policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.865939
policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.926193
policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:52.988229
policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.048152
policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.106883
policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.162585
policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.223483
policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.282271
policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.341117
policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.393998
policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.46781
policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.522606
policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.578401
policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.637608
policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.690727
policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.744225
policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.813002
policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.867795
policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.929281
policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:53.996155
policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.062376
policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.126963
policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.187813
policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.244787
policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.335982
policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.392756
policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.451345
policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.512823
policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.566218
policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.64168
policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.697173
policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.746997
policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.809085
policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.871903
policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.931449
policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:54.98059
policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.03947
policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.099374
policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.163928
policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.218052
policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.269932
policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.323234
policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.371814
policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.424848
policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.485115
policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.542297
policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.596141
policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.6625
policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.725785
policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.78128
policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.832916
policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.881255
policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.932909
policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:55.976095
policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.027711
policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.0753
policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.121685
policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.173682
policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.224354
policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.274498
policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.322495
policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1406252312500800u | 1 | 2025-06-14 23:12:56.374279
policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.430786
policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.481618
policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.546019
policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.59993
policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.65801
policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.722445
policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.769749
policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.820737
policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.875308
policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.931165
policy-db-migrator
| 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:56.986267 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:57.041198 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1406252312500900u | 1 | 2025-06-14 23:12:57.094408 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.155179 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.213894 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.262445 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.3235 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.376142 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.433282 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.494598 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.554753 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1406252312501000u | 1 | 2025-06-14 23:12:57.609133 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1406252312501100u | 1 | 2025-06-14 23:12:57.657346 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1406252312501200u | 1 | 2025-06-14 23:12:57.726885 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1406252312501200u | 1 | 2025-06-14 23:12:57.793651 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1406252312501200u | 1 | 2025-06-14 23:12:57.848586 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1406252312501200u | 1 | 2025-06-14 23:12:57.901629 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1406252312501300u | 1 | 2025-06-14 23:12:57.948017 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1406252312501300u | 1 | 2025-06-14 23:12:58.002859 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1406252312501300u | 1 | 2025-06-14 23:12:58.062674 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... 
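Each policy-db-migrator run is recorded in a per-schema changelog table whose columns match the listing just printed (id, script, operation, from_version, to_version, tag, success, attime). A quick way to confirm that every policyadmin script really completed is to query that table for rows where success is not 1. The sketch below assumes psycopg2 is available; the host, password, and exact changelog table name are illustrative assumptions, since the log does not show them:

import psycopg2  # assumed available; not part of this build

# Connection details and table name are illustrative, not taken from this log.
conn = psycopg2.connect(host="localhost", dbname="migration",
                        user="policy_user", password="CHANGE_ME")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT id, script, from_version, to_version, attime "
        "FROM policyadmin_schema_changelog "
        "WHERE success <> 1 ORDER BY id"
    )
    # An empty result matches the listing above: all 126 rows have success = 1.
    for row in cur.fetchall():
        print("FAILED:", row)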
policy-db-migrator | Initializing clampacm...
policy-db-migrator | 97 blocks
policy-db-migrator | Preparing upgrade release version: 1400
policy-db-migrator | Preparing upgrade release version: 1500
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Preparing upgrade release version: 1601
policy-db-migrator | Preparing upgrade release version: 1700
policy-db-migrator | Preparing upgrade release version: 1701
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | clampacm: upgrade available: 0 -> 1701
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1701
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-participantreplica.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-participant.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-message.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-messagejob.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-participantreplica.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | clampacm: OK: upgrade (1701)
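Every script in the clampacm upgrade above follows the same protocol: the migrator announces "> upgrade <script>", psql echoes one tag per statement it ran (CREATE TABLE, ALTER TABLE, UPDATE 0, ...), a final INSERT 0 1 records the script in the changelog, and rc=0 reports the script's exit status. A stripped-down sketch of that loop follows; this is not the actual policy-db-migrator code, and the psql invocation and script list are illustrative:

import subprocess

SCRIPTS = ["0100-automationcomposition.sql",
           "0200-automationcompositiondefinition.sql"]  # ... in version order

def upgrade(script: str, dbname: str = "clampacm") -> int:
    print(f"> upgrade {script}")
    # ON_ERROR_STOP makes psql return a non-zero rc on the first failed statement.
    rc = subprocess.call(["psql", "-U", "policy_user", "-d", dbname,
                          "-v", "ON_ERROR_STOP=1", "-f", script])
    print(f"rc={rc}")
    return rc

for s in SCRIPTS:
    if upgrade(s) != 0:
        break  # a failing script ends the run instead of masking the error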
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | ----------+---------
policy-db-migrator | clampacm | 1701
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:58.759109
policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:58.810669
policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:58.866404
policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:58.931193
policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:58.990061
policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.044051
policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.097735
policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.14772
policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.200538
policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.263627
policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.315985
policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.368813
policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1406252312581400u | 1 | 2025-06-14 23:12:59.440936
policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.491059
policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.532939
policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.584349
policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.633966
policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.684258
policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.738127
policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.78404
policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1406252312581500u | 1 | 2025-06-14 23:12:59.833411
policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1406252312581600u | 1 | 2025-06-14 23:12:59.881719
policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1406252312581600u | 1 | 2025-06-14 23:12:59.931104
policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1406252312581601u | 1 | 2025-06-14 23:12:59.997665
policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1406252312581601u | 1 | 2025-06-14 23:13:00.045624
policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1406252312581700u | 1 | 2025-06-14 23:13:00.101815
policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1406252312581700u | 1 | 2025-06-14 23:13:00.161272
policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1406252312581700u | 1 | 2025-06-14 23:13:00.215603
policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.271279
policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.324202
policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.37484
policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.425443
policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.478797
policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.533188
policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.586514
policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.634545
policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1406252312581701u | 1 | 2025-06-14 23:13:00.693303
policy-db-migrator | (37 rows)
policy-db-migrator |
policy-db-migrator | clampacm: OK @ 1701
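The tag column groups every script executed in one batch. Judging by the values in this run (for example 1406252312581701u next to attime 2025-06-14 23:12:58 and to_version 1701), the tag appears to concatenate the run's date (DDMMYY), time (HHMMSS), the target version, and an operation suffix; that reading is inferred from this log, not from migrator documentation. A small parser under that assumption:

def parse_tag(tag: str) -> dict:
    # Layout inferred from this log: DDMMYY + HHMMSS + to_version + operation suffix.
    assert len(tag) == 17, "unexpected tag length for this assumed layout"
    d, t, ver, op = tag[:6], tag[6:12], tag[12:16], tag[16:]
    return {
        "date": f"20{d[4:6]}-{d[2:4]}-{d[0:2]}",
        "time": f"{t[0:2]}:{t[2:4]}:{t[4:6]}",
        "to_version": ver,
        "operation": {"u": "upgrade"}.get(op, op),
    }

# -> {'date': '2025-06-14', 'time': '23:12:58', 'to_version': '1701', 'operation': 'upgrade'}
print(parse_tag("1406252312581701u"))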
policy-db-migrator | Initializing pooling...
policy-db-migrator | 4 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | pooling: upgrade available: 0 -> 1600
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-distributed.locking.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | pooling: OK: upgrade (1600)
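The entire pooling schema is a single script, 0100-distributed.locking.sql, which creates one table plus two indexes and appears to back the PDP-D distributed-locking feature. The log shows only the statement tags, not the DDL, so the table and column names below are illustrative; the claim pattern (an UPDATE that succeeds only when the lock row is free or expired) is the usual shape of such a database-backed lock:

# Illustrative only: the real DDL lives inside 0100-distributed.locking.sql.
CLAIM_LOCK = """
UPDATE locks
   SET owner = %(owner)s, expiration = now() + interval '15 seconds'
 WHERE resource = %(resource)s
   AND (owner = %(owner)s OR expiration < now())
"""

def try_claim(cur, resource: str, owner: str) -> bool:
    # rowcount == 1 means this owner now holds (or has renewed) the lock.
    cur.execute(CLAIM_LOCK, {"resource": resource, "owner": owner})
    return cur.rowcount == 1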
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | ---------+---------
policy-db-migrator | pooling | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1406252313011600u | 1 | 2025-06-14 23:13:01.355963
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | pooling: OK @ 1600
policy-db-migrator | Initializing operationshistory...
policy-db-migrator | 6 blocks
policy-db-migrator | Preparing upgrade release version: 1600
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-operationshistory.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | operationshistory: OK: upgrade (1600)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1406252313011600u | 1 | 2025-06-14 23:13:02.015772
policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1406252313011600u | 1 | 2025-06-14 23:13:02.083766
policy-db-migrator | (2 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.9:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap | . ____ _ __ _ _
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-pap | =========|_|==============|___/=/_/_/_/
policy-pap |
policy-pap | :: Spring Boot :: (v3.4.6)
policy-pap |
policy-pap | [2025-06-14T23:13:15.551+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 52 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-14T23:13:15.553+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-14T23:13:17.056+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-14T23:13:17.153+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 7 JPA repository interfaces.
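Before the Spring application starts, the container entrypoint blocks until its dependencies accept TCP connections, which is where the "Waiting for api port 6969..." and "Waiting for kafka port 9092..." lines above come from. The real entrypoint is a shell script; functionally it amounts to a loop like this sketch, with host names taken from the compose service names in the log:

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> None:
    # Retry until the dependency accepts a TCP connection, as the entrypoint does.
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host} ({port}) open")
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} still closed after {timeout}s")
            time.sleep(1)

wait_for_port("api", 6969)
wait_for_port("kafka", 9092)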
policy-pap | [2025-06-14T23:13:18.250+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-14T23:13:18.266+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-14T23:13:18.271+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-14T23:13:18.271+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-14T23:13:18.351+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-14T23:13:18.352+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2736 ms
policy-pap | [2025-06-14T23:13:18.852+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-14T23:13:18.940+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-14T23:13:18.995+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-14T23:13:19.433+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-14T23:13:19.480+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-14T23:13:19.731+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@4769378c
policy-pap | [2025-06-14T23:13:19.734+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2025-06-14T23:13:19.840+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap | Database driver: undefined/unknown
policy-pap | Database version: 16.4
policy-pap | Autocommit mode: undefined/unknown
policy-pap | Isolation level: undefined/unknown
policy-pap | Minimum pool size: undefined/unknown
policy-pap | Maximum pool size: undefined/unknown
policy-pap | [2025-06-14T23:13:21.982+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-14T23:13:21.986+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-14T23:13:23.304+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-1
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = 3803e88f-f5f0-4f29-85ef-f570c18454fb
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-14T23:13:23.390+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-14T23:13:23.548+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-14T23:13:23.548+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-14T23:13:23.548+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942803546
policy-pap | [2025-06-14T23:13:23.551+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-1, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Subscribed to topic(s): policy-pdp-pap
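PAP creates two Kafka consumers on the policy-pdp-pap topic: the one above, in a randomly generated group (3803e88f-...), and a second one, shown next, in the shared policy-pap group. Both use latest offsets, auto-commit, and PLAINTEXT transport. For reference, a rough Python equivalent of the non-default settings, using kafka-python rather than PAP's actual Java client:

from kafka import KafkaConsumer  # kafka-python, used here only for illustration

# Mirrors the values shown in the ConsumerConfig dump above.
consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers=["kafka:9092"],
    group_id="policy-pap",
    auto_offset_reset="latest",
    enable_auto_commit=True,
    security_protocol="PLAINTEXT",
    session_timeout_ms=45000,
    heartbeat_interval_ms=3000,
    max_poll_records=500,
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)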
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-14T23:13:23.552+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-14T23:13:23.560+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-14T23:13:23.560+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-14T23:13:23.560+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942803560 policy-pap | [2025-06-14T23:13:23.560+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-14T23:13:23.947+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-14T23:13:24.072+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
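Editor's note: the ConsumerConfig dumps above are the Kafka client library echoing its effective settings at construction time. For readers unfamiliar with that API, here is a minimal, self-contained sketch of how a consumer with these settings is typically built. It is not the PAP source code; only property values visible in the dump are carried over, and the class name PdpPapConsumerSketch is a placeholder.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        // Values taken from the ConsumerConfig dump above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "45000");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Corresponds to the "Subscribed to topic(s): policy-pdp-pap" lines above.
        consumer.subscribe(List.of("policy-pdp-pap"));
        return consumer;
    }
}

Any setting not supplied explicitly (sasl.*, ssl.*, and so on) falls back to the client defaults, which is why the dump prints the full list even though only a handful of keys were configured.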
policy-pap | [2025-06-14T23:13:24.150+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-pap | [2025-06-14T23:13:24.399+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
policy-pap | [2025-06-14T23:13:25.191+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-pap | [2025-06-14T23:13:25.306+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-14T23:13:25.323+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2025-06-14T23:13:25.343+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2025-06-14T23:13:25.343+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2025-06-14T23:13:25.344+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2025-06-14T23:13:25.345+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2025-06-14T23:13:25.345+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-pap | [2025-06-14T23:13:25.345+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-pap | [2025-06-14T23:13:25.345+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-pap | [2025-06-14T23:13:25.347+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3803e88f-f5f0-4f29-85ef-f570c18454fb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5919dd67
policy-pap | [2025-06-14T23:13:25.355+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3803e88f-f5f0-4f29-85ef-f570c18454fb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-14T23:13:25.356+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = 3803e88f-f5f0-4f29-85ef-f570c18454fb
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-14T23:13:25.356+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-14T23:13:25.363+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-14T23:13:25.363+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-14T23:13:25.363+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942805363
policy-pap | [2025-06-14T23:13:25.363+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-14T23:13:25.364+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-pap | [2025-06-14T23:13:25.364+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=34d08da0-2118-45eb-9b0c-5c14a22b1505, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@dc46916
policy-pap | [2025-06-14T23:13:25.364+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=34d08da0-2118-45eb-9b0c-5c14a22b1505, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-14T23:13:25.364+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-policy-pap-4
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | enable.metrics.push = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = policy-pap
policy-pap | group.instance.id = null
policy-pap | group.protocol = classic
policy-pap | group.remote.assignor = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-14T23:13:25.365+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942805370
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|ServiceManager|main] Policy PAP starting topics
policy-pap | [2025-06-14T23:13:25.370+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=34d08da0-2118-45eb-9b0c-5c14a22b1505, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-14T23:13:25.371+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3803e88f-f5f0-4f29-85ef-f570c18454fb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-14T23:13:25.371+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3fbcd648-b9bb-408c-b536-5883796c5479, alive=false, publisher=null]]: starting
policy-pap | [2025-06-14T23:13:25.382+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-1
policy-pap | compression.gzip.level = -1
policy-pap | compression.lz4.level = 9
policy-pap | compression.type = none
policy-pap | compression.zstd.level = 3
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | enable.metrics.push = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-14T23:13:25.383+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-14T23:13:25.395+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2025-06-14T23:13:25.411+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-14T23:13:25.411+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-14T23:13:25.411+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942805411
policy-pap | [2025-06-14T23:13:25.412+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3fbcd648-b9bb-408c-b536-5883796c5479, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-14T23:13:25.412+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1c5d7398-825e-4f19-82d0-3220b42d932b, alive=false, publisher=null]]: starting
policy-pap | [2025-06-14T23:13:25.412+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-2
policy-pap | compression.gzip.level = -1
policy-pap | compression.lz4.level = 9
policy-pap | compression.type = none
policy-pap | compression.zstd.level = 3
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | enable.metrics.push = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metadata.recovery.strategy = none
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.max.ms = 1000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.header.urlencode = false
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2025-06-14T23:13:25.412+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-14T23:13:25.413+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
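Editor's note: producer-1 and producer-2 above are both idempotent String producers (acks = -1, retries = 2147483647, enable.idempotence = true), which is what the "Instantiated an idempotent producer" lines refer to. A minimal sketch of an equivalent client, assuming only the values visible in the ProducerConfig dump (the class name and sample payload are illustrative, not PAP source):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // Enabling idempotence implies acks=all and effectively unbounded retries,
        // matching the acks = -1 / retries = 2147483647 entries in the dump.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The "[OUT|KAFKA|policy-pdp-pap]" lines later in the log correspond to sends like this.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
        }
    }
}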
policy-pap | [2025-06-14T23:13:25.416+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-14T23:13:25.416+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-14T23:13:25.417+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749942805416
policy-pap | [2025-06-14T23:13:25.417+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1c5d7398-825e-4f19-82d0-3220b42d932b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-14T23:13:25.417+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2025-06-14T23:13:25.417+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2025-06-14T23:13:25.418+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2025-06-14T23:13:25.418+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2025-06-14T23:13:25.420+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2025-06-14T23:13:25.420+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2025-06-14T23:13:25.421+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2025-06-14T23:13:25.421+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2025-06-14T23:13:25.422+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2025-06-14T23:13:25.422+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2025-06-14T23:13:25.423+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2025-06-14T23:13:25.424+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.694 seconds (process running for 11.256)
policy-pap | [2025-06-14T23:13:25.957+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A
policy-pap | [2025-06-14T23:13:25.958+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-14T23:13:25.958+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A
policy-pap | [2025-06-14T23:13:25.958+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A
policy-pap | [2025-06-14T23:13:26.038+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-14T23:13:26.044+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-14T23:13:26.052+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:26.052+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Cluster ID: 9ZY3bO1PSZuNse_HC2BO3A
policy-pap | [2025-06-14T23:13:26.183+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-14T23:13:26.210+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:26.388+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:26.427+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:26.861+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:26.882+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-14T23:13:27.564+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-14T23:13:27.570+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] (Re-)joining group
policy-pap | [2025-06-14T23:13:27.591+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-14T23:13:27.595+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-14T23:13:27.596+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Request joining group due to: need to re-join with the given member-id: consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360
policy-pap | [2025-06-14T23:13:27.597+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] (Re-)joining group
policy-pap | [2025-06-14T23:13:27.601+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e
policy-pap | [2025-06-14T23:13:27.601+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-14T23:13:30.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e', protocol='range'}
policy-pap | [2025-06-14T23:13:30.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Successfully joined group with generation Generation{generationId=1, memberId='consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360', protocol='range'}
policy-pap | [2025-06-14T23:13:30.623+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-14T23:13:30.624+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Finished assignment for group at generation 1: {consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-14T23:13:30.648+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-fe112819-880f-449f-a57d-c4833b9b241e', protocol='range'}
policy-pap | [2025-06-14T23:13:30.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-14T23:13:30.654+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-14T23:13:30.661+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Successfully synced group in generation Generation{generationId=1, memberId='consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3-437280fa-5196-4e5c-9649-ee448e3b1360', protocol='range'}
policy-pap | [2025-06-14T23:13:30.661+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-14T23:13:30.662+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-14T23:13:30.668+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-14T23:13:30.673+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-14T23:13:30.679+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3803e88f-f5f0-4f29-85ef-f570c18454fb-3, groupId=3803e88f-f5f0-4f29-85ef-f570c18454fb] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-14T23:13:30.679+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
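Editor's note: the sequence above is the standard classic-group rebalance: each consumer discovers the coordinator, joins, receives a range assignment for policy-pdp-pap-0, and, because there is no committed offset and auto.offset.reset = latest, resets to the log end. Once assigned, consumption is an ordinary poll loop; a minimal sketch follows (the 15000 ms poll duration mirrors the fetchTimeout=15000 seen in the topic-source lines above, and the consumer is assumed to be built as in the earlier sketch):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoopSketch {
    static void run(KafkaConsumer<String, String> consumer) {
        while (true) {
            // With enable.auto.commit = true, offsets are committed roughly every
            // auto.commit.interval.ms (5000 ms in the dump) during poll() calls.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
            for (ConsumerRecord<String, String> record : records) {
                // Analogous to the "[IN|KAFKA|<topic>]" lines that follow in the log.
                System.out.printf("[IN|KAFKA|%s] %s%n", record.topic(), record.value());
            }
        }
    }
}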
policy-pap | [2025-06-14T23:13:41.613+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | [2025-06-14T23:13:41.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-pap | [2025-06-14T23:13:41.617+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 4 ms
policy-pap | [2025-06-14T23:13:47.118+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-14T23:13:47.119+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6858f4db-4239-446f-8164-f22d4e30a37c","timestampMs":1749942827064,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-14T23:13:47.121+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6858f4db-4239-446f-8164-f22d4e30a37c","timestampMs":1749942827064,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-14T23:13:47.128+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
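Editor's note: the PDP_STATUS payloads above are plain JSON strings carried as Kafka record values. As a reading aid, a hypothetical Gson-based parse of the fields visible in the logged heartbeat is sketched below (the log itself only confirms GSON use for REST calls later on; the DTO name and field selection here are assumptions, covering only keys present in the payload):

import com.google.gson.Gson;

public class PdpStatusSketch {
    // Hypothetical DTO limited to fields visible in the logged payload.
    static class PdpStatus {
        String pdpType;
        String state;
        String healthy;
        String messageName;
        String requestId;
        long timestampMs;
        String name;
        String pdpGroup;
    }

    public static void main(String[] args) {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"messageName\":\"PDP_STATUS\",\"requestId\":\"6858f4db-4239-446f-8164-f22d4e30a37c\","
                + "\"timestampMs\":1749942827064,\"name\":\"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f\","
                + "\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        System.out.println(status.messageName + " from " + status.name + " state=" + status.state);
    }
}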
policy-pap | [2025-06-14T23:13:47.209+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting
policy-pap | [2025-06-14T23:13:47.209+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting listener
policy-pap | [2025-06-14T23:13:47.211+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting timer
policy-pap | [2025-06-14T23:13:47.211+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c3373ab8-1bd6-4970-9c3d-c7ee4fb40596, expireMs=1749942857211]
policy-pap | [2025-06-14T23:13:47.214+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting enqueue
policy-pap | [2025-06-14T23:13:47.214+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate started
policy-pap | [2025-06-14T23:13:47.215+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=c3373ab8-1bd6-4970-9c3d-c7ee4fb40596, expireMs=1749942857211]
policy-pap | [2025-06-14T23:13:47.222+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","timestampMs":1749942827195,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.275+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","timestampMs":1749942827195,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.276+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","timestampMs":1749942827195,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.276+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-14T23:13:47.276+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-14T23:13:47.304+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89bdf484-1903-4a17-b9b5-c7cec63015ae","timestampMs":1749942827288,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-14T23:13:47.306+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-14T23:13:47.306+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7dabac4a-62b8-4ed8-b162-4cd0e48c9e53","timestampMs":1749942827291,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.307+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping
policy-pap | [2025-06-14T23:13:47.307+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"89bdf484-1903-4a17-b9b5-c7cec63015ae","timestampMs":1749942827288,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-14T23:13:47.308+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping enqueue
policy-pap | [2025-06-14T23:13:47.308+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping timer
policy-pap | [2025-06-14T23:13:47.308+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c3373ab8-1bd6-4970-9c3d-c7ee4fb40596, expireMs=1749942857211]
policy-pap | [2025-06-14T23:13:47.308+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping listener
policy-pap | [2025-06-14T23:13:47.308+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopped
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate successful
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f start publishing next request
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange starting
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange starting listener
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange starting timer
policy-pap | [2025-06-14T23:13:47.318+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=3bf84127-8a5e-465a-8114-8042d59e9fb8, expireMs=1749942857318]
policy-pap | [2025-06-14T23:13:47.319+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange starting enqueue
policy-pap | [2025-06-14T23:13:47.319+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange started
policy-pap | [2025-06-14T23:13:47.319+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=3bf84127-8a5e-465a-8114-8042d59e9fb8, expireMs=1749942857318]
policy-pap | [2025-06-14T23:13:47.320+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3bf84127-8a5e-465a-8114-8042d59e9fb8","timestampMs":1749942827196,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.359+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3bf84127-8a5e-465a-8114-8042d59e9fb8","timestampMs":1749942827196,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.361+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-14T23:13:47.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3bf84127-8a5e-465a-8114-8042d59e9fb8","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d336d489-20d4-4023-b0bc-dbf2157bead6","timestampMs":1749942827332,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.382+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c3373ab8-1bd6-4970-9c3d-c7ee4fb40596","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7dabac4a-62b8-4ed8-b162-4cd0e48c9e53","timestampMs":1749942827291,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange stopping
policy-pap | [2025-06-14T23:13:47.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange stopping enqueue
policy-pap | [2025-06-14T23:13:47.382+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c3373ab8-1bd6-4970-9c3d-c7ee4fb40596
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange stopping timer
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=3bf84127-8a5e-465a-8114-8042d59e9fb8, expireMs=1749942857318]
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange stopping listener
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange stopped
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpStateChange successful
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f start publishing next request
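Editor's note: the TimerManager lines above show PAP's request/response correlation pattern: each outgoing PDP_UPDATE or PDP_STATE_CHANGE registers a roughly 30-second expiry timer keyed by requestId, and the timer is cancelled when a PDP_STATUS arrives whose responseTo matches; requests that never get a response show up later as "timer discarded (expired)". The following is a generic sketch of that pattern using java.util.concurrent, not the ONAP TimerManager source:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RequestTimerSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    // Called when a request is published; mirrors "update timer registered".
    void register(String requestId) {
        pending.put(requestId,
                scheduler.schedule(() -> onExpired(requestId), 30, TimeUnit.SECONDS));
    }

    // Called when a PDP_STATUS arrives whose responseTo matches; mirrors "update timer cancelled".
    void onResponse(String responseTo) {
        ScheduledFuture<?> timer = pending.remove(responseTo);
        if (timer != null) {
            timer.cancel(false);
        }
    }

    private void onExpired(String requestId) {
        pending.remove(requestId);
        System.out.println("update timer discarded (expired) for " + requestId);
    }
}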
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting
policy-pap | [2025-06-14T23:13:47.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting listener
policy-pap | [2025-06-14T23:13:47.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting timer
policy-pap | [2025-06-14T23:13:47.384+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=6207f219-4860-4bce-84b1-d2ecd1bb78e2, expireMs=1749942857384]
policy-pap | [2025-06-14T23:13:47.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate starting enqueue
policy-pap | [2025-06-14T23:13:47.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate started
policy-pap | [2025-06-14T23:13:47.384+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","timestampMs":1749942827347,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.392+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3bf84127-8a5e-465a-8114-8042d59e9fb8","timestampMs":1749942827196,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.393+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-14T23:13:47.398+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","timestampMs":1749942827347,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.398+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-14T23:13:47.402+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3bf84127-8a5e-465a-8114-8042d59e9fb8","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d336d489-20d4-4023-b0bc-dbf2157bead6","timestampMs":1749942827332,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.405+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3bf84127-8a5e-465a-8114-8042d59e9fb8
policy-pap | [2025-06-14T23:13:47.408+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f5bb062d-1fe0-4ab3-ba77-fe506268dd71","timestampMs":1749942827397,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-1674c2ff-d851-4ba0-8f9a-b19a5719d991","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","timestampMs":1749942827347,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping enqueue
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping timer
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6207f219-4860-4bce-84b1-d2ecd1bb78e2, expireMs=1749942857384]
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopping listener
policy-pap | [2025-06-14T23:13:47.410+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate stopped
policy-pap | [2025-06-14T23:13:47.413+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6207f219-4860-4bce-84b1-d2ecd1bb78e2","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f5bb062d-1fe0-4ab3-ba77-fe506268dd71","timestampMs":1749942827397,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-14T23:13:47.414+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6207f219-4860-4bce-84b1-d2ecd1bb78e2
policy-pap | [2025-06-14T23:13:47.417+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f PdpUpdate successful
policy-pap | [2025-06-14T23:13:47.417+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f has no more requests
policy-pap | [2025-06-14T23:14:17.212+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c3373ab8-1bd6-4970-9c3d-c7ee4fb40596, expireMs=1749942857211]
policy-pap | [2025-06-14T23:14:17.318+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=3bf84127-8a5e-465a-8114-8042d59e9fb8, expireMs=1749942857318]
policy-pap | [2025-06-14T23:15:23.059+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-14T23:15:23.067+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-14T23:15:23.451+00:00|INFO|SessionData|http-nio-6969-exec-9] unknown group testGroup
policy-pap | [2025-06-14T23:15:24.145+00:00|INFO|SessionData|http-nio-6969-exec-9] create cached group testGroup
policy-pap | [2025-06-14T23:15:24.145+00:00|INFO|SessionData|http-nio-6969-exec-9] creating DB group testGroup
policy-pap | [2025-06-14T23:15:24.698+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
policy-pap | [2025-06-14T23:15:25.032+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-14T23:15:25.133+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-14T23:15:25.133+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup
policy-pap | [2025-06-14T23:15:25.134+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup
policy-pap | [2025-06-14T23:15:25.148+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-14T23:15:25Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-14T23:15:25Z, user=policyadmin)]
policy-pap | [2025-06-14T23:15:25.423+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-pap | [2025-06-14T23:15:25.887+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup
policy-pap | [2025-06-14T23:15:25.888+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2025-06-14T23:15:25.888+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-14T23:15:25.888+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
policy-pap | [2025-06-14T23:15:25.888+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
policy-pap | [2025-06-14T23:15:25.899+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-14T23:15:25Z, user=policyadmin)]
policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group defaultGroup
policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup
policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove
policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup policy-pap | [2025-06-14T23:15:26.327+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup policy-pap | [2025-06-14T23:15:26.339+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-14T23:15:26Z, user=policyadmin)] policy-pap | [2025-06-14T23:15:26.905+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup policy-pap | [2025-06-14T23:15:26.908+00:00|INFO|SessionData|http-nio-6969-exec-3] deleting DB group testGroup policy-pap | [2025-06-14T23:15:47.301+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"ed8915a4-34f7-46b6-87ab-23654d431c74","timestampMs":1749942947287,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-14T23:15:47.301+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"ed8915a4-34f7-46b6-87ab-23654d431c74","timestampMs":1749942947287,"name":"apex-b7d6b4b5-331f-490f-a18c-61bfd0ff2f3f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2025-06-14T23:15:47.302+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
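The policy-pap traffic above is plain JSON on the policy-pdp-pap and policy-heartbeat topics, so the same exchange can be watched straight from the Kafka broker. A minimal sketch; the container name "kafka" and the in-container bootstrap address localhost:9092 are assumptions about this compose stack, not values taken from the log:

# Tail PDP_STATUS traffic on policy-pdp-pap and pull out the fields the PAP
# request lifecycle keys on (requestId, responseTo, responseStatus).
# "kafka" / localhost:9092 are assumed names for this sketch.
docker exec kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic policy-pdp-pap --from-beginning --timeout-ms 10000 \
| jq -r 'select(.messageName == "PDP_STATUS")
         | [.requestId, (.response.responseTo // "-"), (.response.responseStatus // "-")]
         | @tsv'

Matching the responseTo column against the Timer [name=...] entries above is how the "timer cancelled" vs. "timer discarded (expired)" lines can be traced back to individual requests.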
postgres | waiting for server to start....2025-06-14 23:12:47.841 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-14 23:12:47.843 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-14 23:12:47.857 UTC [51] LOG: database system was shut down at 2025-06-14 23:12:47 UTC
postgres | 2025-06-14 23:12:47.863 UTC [48] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres |
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres |
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres |
postgres | 2025-06-14 23:12:49.374 UTC [48] LOG: received fast shutdown request
postgres | waiting for server to shut down....2025-06-14 23:12:49.377 UTC [48] LOG: aborting any active transactions
postgres | 2025-06-14 23:12:49.379 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1
postgres | 2025-06-14 23:12:49.380 UTC [49] LOG: shutting down
postgres | 2025-06-14 23:12:49.382 UTC [49] LOG: checkpoint starting: shutdown immediate
postgres | 2025-06-14 23:12:50.107 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.614 s, sync=0.103 s, total=0.728 s; sync files=1788, longest=0.004 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218
postgres | 2025-06-14 23:12:50.121 UTC [48] LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
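Stripped of the -xv trace echo, the db-pg.sh run above reduces to one role creation and a psql loop. A cleaned, standalone version of the same steps (PGSQL_USER and PGSQL_PASSWORD are expected in the environment, exactly as in this run):

#!/bin/bash
# Recreate the per-component databases the way db-pg.sh does above:
# one role, then CREATE / ALTER OWNER / GRANT for each policy database.
set -euo pipefail
: "${PGSQL_USER:?set PGSQL_USER}" "${PGSQL_PASSWORD:?set PGSQL_PASSWORD}"

psql -U postgres -d postgres \
  --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"

for db in migration pooling policyadmin policyclamp operationshistory clampacm; do
    psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
    psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER};"
    psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER};"
done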
postgres | postgres | 2025-06-14 23:12:50.201 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-14 23:12:50.201 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-14 23:12:50.201 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-14 23:12:50.206 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-14 23:12:50.213 UTC [101] LOG: database system was shut down at 2025-06-14 23:12:50 UTC postgres | 2025-06-14 23:12:50.220 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-14T23:12:45.742Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-14T23:12:45.742Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-14T23:12:45.742Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-14T23:12:45.744Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-14T23:12:45.748Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-14T23:12:45.748Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-14T23:12:45.752Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-14T23:12:45.752Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-14T23:12:45.756Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-14T23:12:45.756Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.8µs prometheus | time=2025-06-14T23:12:45.756Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-14T23:12:45.757Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=683.452µs prometheus | time=2025-06-14T23:12:45.757Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=118.734µs wal_replay_duration=747.214µs wbl_replay_duration=240ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.8µs total_replay_duration=1.182598ms prometheus | time=2025-06-14T23:12:45.760Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-14T23:12:45.760Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-14T23:12:45.760Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-14T23:12:45.761Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-14T23:12:45.761Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.24µs remote_storage=2.04µs web_handler=720ns query_engine=1.85µs scrape=250.478µs scrape_sd=193.216µs notify=126.414µs notify_sd=18.071µs rules=2.02µs tracing=4.15µs filename=/etc/prometheus/prometheus.yml totalDuration=1.198657ms prometheus | time=2025-06-14T23:12:45.761Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-14T23:12:45.761Z level=INFO source=manager.go:175 msg="Starting rule manager..." 
component="rule manager" simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2025-06-14 23:12:48,240 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2025-06-14 23:12:48,311 INFO org.onap.policy.models.simulators starting simulator | 2025-06-14 23:12:48,312 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2025-06-14 23:12:48,588 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2025-06-14 23:12:48,590 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2025-06-14 23:12:48,868 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-14 23:12:48,880 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-14 23:12:48,883 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-14 23:12:48,889 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-14 
23:12:48,940 INFO Session workerName=node0 simulator | 2025-06-14 23:12:48,956 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-14 23:12:49,532 INFO Using GSON for REST calls simulator | 2025-06-14 23:12:49,596 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} simulator | 2025-06-14 23:12:49,603 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2025-06-14 23:12:49,604 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @1882ms simulator | 2025-06-14 23:12:49,604 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4279 ms. simulator | 2025-06-14 23:12:49,618 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2025-06-14 23:12:49,626 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-14 23:12:49,627 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-14 23:12:49,628 INFO JettyJerseyServer 
[JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-14 23:12:49,630 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-14 23:12:49,637 INFO Session workerName=node0 simulator | 2025-06-14 23:12:49,638 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-14 23:12:49,714 INFO Using GSON for REST calls simulator | 2025-06-14 23:12:49,725 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} simulator | 2025-06-14 23:12:49,727 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2025-06-14 23:12:49,728 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @2006ms simulator | 2025-06-14 23:12:49,729 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4901 ms. 
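Each simulator above logs STARTED on a fixed connector (A&AI on 6666, SDNC on 6668, and SO next on 6669). A quick reachability sketch against those ports; plain HTTP, localhost, and the root path are assumptions here, not something the log confirms:

# Probe each simulator's Jetty connector; any HTTP status code means the
# listener is up. Ports come from the log above; "/" is an assumed path.
for port in 6666 6668 6669; do
    printf '%s: ' "$port"
    curl -s -o /dev/null -w '%{http_code}\n' "http://localhost:${port}/" || echo unreachable
done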
simulator | 2025-06-14 23:12:49,730 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2025-06-14 23:12:49,735 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START simulator | 2025-06-14 23:12:49,735 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING simulator | 2025-06-14 23:12:49,737 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN simulator | 2025-06-14 23:12:49,739 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 simulator | 2025-06-14 23:12:49,749 INFO Session workerName=node0 simulator | 2025-06-14 23:12:49,751 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-14 23:12:49,812 INFO Using GSON for REST calls simulator | 2025-06-14 23:12:49,824 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} simulator | 2025-06-14 23:12:49,826 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2025-06-14 23:12:49,827 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @2104ms simulator | 
2025-06-14 23:12:49,827 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4910 ms. simulator | 2025-06-14 23:12:49,829 INFO org.onap.policy.models.simulators started zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-14 23:12:52,381] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,383] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,384] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,384] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,384] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,386] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 23:12:52,386] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 23:12:52,386] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-14 23:12:52,386] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-14 23:12:52,388] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-14 23:12:52,389] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,389] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,389] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,389] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,389] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-14 23:12:52,390] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-14 23:12:52,401] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-14 23:12:52,403] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-14 23:12:52,403] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-14 23:12:52,406] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 23:12:52,414] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,414] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,416] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1
.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
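With ZooKeeper bound to the clientPort logged above (0.0.0.0:2181), a four-letter-word probe is the quickest health check. A sketch only: recent ZooKeeper releases answer only the commands listed in zookeeper.4lw.commands.whitelist, and whether srvr is whitelisted in this image is an assumption:

# Ask the standalone server for its stats over the client port (2181, from
# the log above). "srvr" must be permitted by zookeeper.4lw.commands.whitelist.
echo srvr | nc -w 2 localhost 2181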
zookeeper | [2025-06-14 23:12:52,416] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,417] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,418] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,419] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-14 23:12:52,420] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,420] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,421] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-14 23:12:52,421] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-14 23:12:52,422] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,422] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,422] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,422] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,423] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,423] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-14 23:12:52,425] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,425] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,425] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-14 23:12:52,425] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-14 23:12:52,425] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,449] INFO Logging initialized @450ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-14 23:12:52,559] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 23:12:52,559] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 23:12:52,588] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-14 23:12:52,635] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 23:12:52,636] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 23:12:52,637] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-14 23:12:52,644] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-14 23:12:52,655] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-14 23:12:52,666] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-14 23:12:52,667] INFO Started @672ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-14 23:12:52,667] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-14 23:12:52,671] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-14 23:12:52,672] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-14 23:12:52,673] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-14 23:12:52,674] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-14 23:12:52,693] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-14 23:12:52,693] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-14 23:12:52,694] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 23:12:52,694] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 23:12:52,699] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-14 23:12:52,699] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 23:12:52,703] INFO Snapshot loaded in 10 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-14 23:12:52,704] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-14 23:12:52,704] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-14 23:12:52,712] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-14 23:12:52,715] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-14 23:12:52,734] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-14 23:12:52,735] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-14 23:12:55,707] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container grafana Stopping
Container policy-apex-pdp Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-apex-pdp Stopped
Container policy-apex-pdp Removing
Container policy-apex-pdp Removed
Container simulator Stopping
Container policy-pap Stopping
Container simulator Stopped
Container simulator Removing
Container simulator Removed
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2066 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14427864491876740852.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12556947585358194818.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11274574820759100951.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KlSg from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15393821766395064304.sh
provisioning config files...
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11274574820759100951.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KlSg from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15393821766395064304.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config976341320375837593tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16968760094464931228.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4491396409375499241.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KlSg from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13085680391840819440.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14827741284529821146.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KlSg from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11313353866053108814.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KlSg from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-KlSg/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2102
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
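The upload above is driven by lftools; a hedged sketch of the kind of calls logs-deploy.sh makes, with the Nexus URL, path, and archive pattern copied from the log output and the exact flags assumed:

# Sketch only: lftools invocations consistent with the output above; not the actual script.
NEXUS_URL=https://nexus.onap.org
NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2102

# Push matching workspace artifacts, then the build logs, to the Nexus logs repo.
lftools deploy archives -p '**/target/surefire-reports/*-output.txt' "${NEXUS_URL}" "${NEXUS_PATH}" "${WORKSPACE}"
lftools deploy logs "${NEXUS_URL}" "${NEXUS_PATH}" "${BUILD_URL}"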
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-21290 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   16G  140G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         880       23269           0        8016       30831
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:40:d3:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.68/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85951sec preferred_lft 85951sec
    inet6 fe80::f816:3eff:fe40:d3a9/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:17:9c:48:74 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:17ff:fe9c:4874/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21290)  06/14/25  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       162.07     23.13    138.94   2315.61  59459.96
23:13:01       680.62      5.35    675.27    429.32 249783.01
23:14:01        50.52      0.12     50.41     12.26  41709.32
23:15:01       221.21      0.37    220.85     37.99  33823.83
23:16:01         7.20      0.02      7.18      2.40    192.37
23:17:01        35.98      0.13     35.84     13.06    591.63
Average:       192.95      4.85    188.10    468.44  64265.17

23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     27938076  31596812   5001144     15.18     90888   3833964   1990100      5.86   1001112   3612736   2059660
23:13:01     23528056  30883004   9411164     28.57    164236   7252764   6979524     20.54   1909656   6832696     49036
23:14:01     22242848  29699276  10696372     32.47    165784   7354164   8466444     24.91   3183848   6826592       252
23:15:01     21540684  29537256  11398536     34.60    206468   7800076   8794480     25.88   3435852   7215136      1568
23:16:01     21540124  29538424  11399096     34.61    206572   7801812   8852356     26.05   3440008   7211884       120
23:17:01     23223904  31068712   9715316     29.49    207136   7662744   2991940      8.80   1931100   7091292        68
Average:     23335615  30387247   9603605     29.16    173514   6950921   6345807     18.67   2483596   6465056    351784

23:11:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01         ens3    875.97    553.52  19777.42     48.00      0.00      0.00      0.00      0.00
23:12:01           lo     11.86     11.86      1.12      1.12      0.00      0.00      0.00      0.00
23:13:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01  vethfd0b249      1.27      1.70      0.15      0.17      0.00      0.00      0.00      0.00
23:13:01  vethedc9172      0.05      0.38      0.00      0.02      0.00      0.00      0.00      0.00
23:13:01  vetha512c3e      0.40      0.60      0.02      0.04      0.00      0.00      0.00      0.00
23:14:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01  vethfd0b249      2.95      4.10      0.58      0.68      0.00      0.00      0.00      0.00
23:14:01  vethedc9172      0.42      0.48      0.05      0.94      0.00      0.00      0.00      0.00
23:14:01  vetha512c3e     91.40     91.00     16.00     18.59      0.00      0.00      0.00      0.00
23:15:01      docker0    136.81    188.57      8.64   1348.90      0.00      0.00      0.00      0.00
23:15:01  vethfd0b249      0.17      0.37      0.01      0.03      0.00      0.00      0.00      0.00
23:15:01  vethedc9172      0.50      0.53      0.05      1.20      0.00      0.00      0.00      0.00
23:15:01  vetha512c3e     32.81     32.69      4.11      8.19      0.00      0.00      0.00      0.00
23:16:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01  vethfd0b249      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
23:16:01  vethedc9172      0.55      0.57      0.05      1.37      0.00      0.00      0.00      0.00
23:16:01  vetha512c3e     70.35     69.94      9.77     18.23      0.00      0.00      0.00      0.00
23:17:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01  vetha512c3e      0.30      0.53      0.02      0.03      0.00      0.00      0.00      0.00
23:17:01  veth6b683e6    402.42    427.53     90.31     56.12      0.00      0.00      0.00      0.01
23:17:01         ens3   2214.33   1356.96  46740.17    180.53      0.00      0.00      0.00      0.00
Average:      docker0     22.80     31.43      1.44    224.81      0.00      0.00      0.00      0.00
Average:  vetha512c3e     32.54     32.46      4.99      7.51      0.00      0.00      0.00      0.00
Average:  veth6b683e6     67.07     71.25     15.05      9.35      0.00      0.00      0.00      0.00
Average:         ens3    312.48    189.95   7650.98     18.49      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21290)  06/14/25  _x86_64_  (8 CPU)

23:10:21     LINUX RESTART  (8 CPU)

23:11:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01        all     13.35      0.00      3.05      2.03      0.04     81.54
23:12:01          0      4.65      0.00      2.92      0.05      0.02     92.36
23:12:01          1      6.49      0.00      2.21      1.12      0.02     90.17
23:12:01          2     33.86      0.00      4.23      3.58      0.07     58.27
23:12:01          3     37.35      0.00      4.56      0.74      0.07     57.28
23:12:01          4      7.26      0.00      1.99      2.69      0.03     88.02
23:12:01          5      6.08      0.00      3.23      0.35      0.03     90.31
23:12:01          6      6.31      0.00      2.48      2.93      0.03     88.24
23:12:01          7      4.69      0.00      2.73      4.74      0.03     87.80
23:13:01        all     19.88      0.00      8.50     11.34      0.07     60.20
23:13:01          0     18.19      0.00      8.02      8.70      0.07     65.03
23:13:01          1     21.25      0.00      8.66      8.12      0.07     61.90
23:13:01          2     21.05      0.00      7.81      2.64      0.07     68.44
23:13:01          3     19.58      0.00      8.49      7.01      0.07     64.85
23:13:01          4     20.30      0.00      8.11      2.25      0.07     69.28
23:13:01          5     19.80      0.00     10.13     42.25      0.10     27.72
23:13:01          6     20.16      0.00      8.76     15.12      0.08     55.87
23:13:01          7     18.75      0.00      8.05      4.90      0.05     68.24
23:14:01        all     23.52      0.00      2.05      0.90      0.07     73.45
23:14:01          0     18.91      0.00      1.88      0.03      0.07     79.11
23:14:01          1     26.95      0.00      2.42      0.03      0.07     70.53
23:14:01          2     24.44      0.00      2.26      0.47      0.07     72.77
23:14:01          3     27.88      0.00      2.19      0.05      0.08     69.80
23:14:01          4     21.98      0.00      2.21      0.05      0.07     75.69
23:14:01          5     25.27      0.00      1.99      6.35      0.10     66.29
23:14:01          6     24.37      0.00      2.16      0.05      0.08     73.34
23:14:01          7     18.38      0.00      1.32      0.18      0.07     80.05
23:15:01        all      8.25      0.00      2.61      1.29      0.06     87.78
23:15:01          0      7.71      0.00      3.29      0.27      0.07     88.66
23:15:01          1      9.21      0.00      2.66      0.25      0.07     87.82
23:15:01          2      8.04      0.00      2.70      2.08      0.05     87.12
23:15:01          3     10.51      0.00      2.72      3.52      0.07     83.19
23:15:01          4      8.25      0.00      2.28      0.02      0.07     89.39
23:15:01          5      8.47      0.00      2.33      3.61      0.05     85.54
23:15:01          6      5.72      0.00      1.92      0.22      0.05     92.09
23:15:01          7      8.04      0.00      2.98      0.37      0.07     88.54
23:16:01        all      5.52      0.00      0.52      0.02      0.06     93.88
23:16:01          0      5.44      0.00      0.68      0.02      0.07     93.79
23:16:01          1      7.66      0.00      0.57      0.03      0.03     91.71
23:16:01          2      4.81      0.00      0.32      0.13      0.03     94.71
23:16:01          3      5.81      0.00      0.41      0.00      0.05     93.73
23:16:01          4      4.47      0.00      0.55      0.02      0.07     94.89
23:16:01          5      5.01      0.00      0.80      0.02      0.07     94.10
23:16:01          6      6.66      0.00      0.50      0.00      0.05     92.79
23:16:01          7      4.27      0.00      0.38      0.00      0.08     95.26
23:17:01        all      1.41      0.00      0.57      0.08      0.05     97.89
23:17:01          0      1.65      0.00      0.77      0.07      0.05     97.46
23:17:01          1      1.77      0.00      0.57      0.03      0.05     97.58
23:17:01          2      1.45      0.00      0.57      0.33      0.05     97.59
23:17:01          3      1.44      0.00      0.50      0.02      0.05     97.99
23:17:01          4      1.42      0.00      0.58      0.03      0.03     97.93
23:17:01          5      1.00      0.00      0.50      0.03      0.07     98.40
23:17:01          6      1.40      0.00      0.47      0.03      0.05     98.05
23:17:01          7      1.17      0.00      0.60      0.03      0.05     98.14
Average:        all     11.97      0.00      2.87      2.59      0.06     82.51
Average:          0      9.40      0.00      2.92      1.51      0.06     86.11
Average:          1     12.20      0.00      2.84      1.59      0.05     83.32
Average:          2     15.60      0.00      2.97      1.54      0.06     79.84
Average:          3     17.06      0.00      3.13      1.88      0.06     77.87
Average:          4     10.60      0.00      2.61      0.84      0.06     85.90
Average:          5     10.91      0.00      3.14      8.66      0.07     77.22
Average:          6     10.76      0.00      2.70      3.04      0.06     83.43
Average:          7      9.20      0.00      2.67      1.70      0.06     86.38
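The tables above are standard sysstat reports; a minimal sketch of reproducing them live on a node (the job's sysstat.sh reads the already-collected sa data file instead, and the interval and count here are assumptions matching the one-minute samples above):

# Sketch only: live sampling that yields reports shaped like the ones above.
sar -b -r -n DEV 60 6   # I/O rates, memory usage, per-interface network stats; 6 one-minute samples
sar -P ALL 60 6         # per-CPU utilization over the same window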