23:10:53 Started by timer
23:10:53 Running as SYSTEM
23:10:53 [EnvInject] - Loading node environment variables.
23:10:53 Building remotely on prd-ubuntu1804-docker-8c-8g-20975 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:53 [ssh-agent] Looking for ssh-agent implementation...
23:10:53 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:53 $ ssh-agent
23:10:53 SSH_AUTH_SOCK=/tmp/ssh-NPJsoIw03hHq/agent.2069
23:10:53 SSH_AGENT_PID=2071
23:10:53 [ssh-agent] Started.
23:10:53 Running ssh-add (command line suppressed)
23:10:53 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13939384302638469769.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13939384302638469769.key)
23:10:53 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:53 The recommended git tool is: NONE
23:10:55 using credential onap-jenkins-ssh
23:10:55 Wiping out workspace first.
23:10:55 Cloning the remote Git repository
23:10:55 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:10:55 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:10:55 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:10:55 > git --version # timeout=10
23:10:55 > git --version # 'git version 2.17.1'
23:10:55 using GIT_SSH to set credentials Gerrit user
23:10:55 Verifying host key using manually-configured host key entries
23:10:55 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:10:55 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:10:55 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:10:56 Avoid second fetch
23:10:56 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:10:56 Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/remotes/origin/master)
23:10:56 > git config core.sparsecheckout # timeout=10
23:10:56 > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
23:10:56 Commit message: "Remove VFC from docker compose and helm configurations"
23:10:56 > git rev-list --no-walk 1e361efcd8a4b3caab4f41f34078024e85ac9d73 # timeout=10
23:10:59 provisioning config files...
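The clone and pinned checkout recorded above can be reproduced by hand; the following is a minimal sketch, assuming the git:// mirror is still reachable and using the revision printed in the log (the target directory name is illustrative, not from the log):

    # fetch all branches from the mirror, then check out the exact revision the job used
    git init policy-docker
    cd policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c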
23:10:59 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:10:59 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:10:59 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8680403825507081097.sh
23:10:59 ---> python-tools-install.sh
23:10:59 Setup pyenv:
23:10:59 * system (set by /opt/pyenv/version)
23:10:59 * 3.8.13 (set by /opt/pyenv/version)
23:10:59 * 3.9.13 (set by /opt/pyenv/version)
23:10:59 * 3.10.6 (set by /opt/pyenv/version)
23:11:03 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-1fkA
23:11:03 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:07 lf-activate-venv(): INFO: Installing: lftools
23:11:32 lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
23:11:32 Generating Requirements File
23:11:51 Python 3.10.6
23:11:51 pip 25.1.1 from /tmp/venv-1fkA/lib/python3.10/site-packages/pip (python 3.10)
23:11:52 appdirs==1.4.4
23:11:52 argcomplete==3.6.2
23:11:52 aspy.yaml==1.3.0
23:11:52 attrs==25.3.0
23:11:52 autopage==0.5.2
23:11:52 beautifulsoup4==4.13.4
23:11:52 boto3==1.38.36
23:11:52 botocore==1.38.36
23:11:52 bs4==0.0.2
23:11:52 cachetools==5.5.2
23:11:52 certifi==2025.4.26
23:11:52 cffi==1.17.1
23:11:52 cfgv==3.4.0
23:11:52 chardet==5.2.0
23:11:52 charset-normalizer==3.4.2
23:11:52 click==8.2.1
23:11:52 cliff==4.10.0
23:11:52 cmd2==2.6.1
23:11:52 cryptography==3.3.2
23:11:52 debtcollector==3.0.0
23:11:52 decorator==5.2.1
23:11:52 defusedxml==0.7.1
23:11:52 Deprecated==1.2.18
23:11:52 distlib==0.3.9
23:11:52 dnspython==2.7.0
23:11:52 docker==7.1.0
23:11:52 dogpile.cache==1.4.0
23:11:52 durationpy==0.10
23:11:52 email_validator==2.2.0
23:11:52 filelock==3.18.0
23:11:52 future==1.0.0
23:11:52 gitdb==4.0.12
23:11:52 GitPython==3.1.44
23:11:52 google-auth==2.40.3
23:11:52 httplib2==0.22.0
23:11:52 identify==2.6.12
23:11:52 idna==3.10
23:11:52 importlib-resources==1.5.0
23:11:52 iso8601==2.1.0
23:11:52 Jinja2==3.1.6
23:11:52 jmespath==1.0.1
23:11:52 jsonpatch==1.33
23:11:52 jsonpointer==3.0.0
23:11:52 jsonschema==4.24.0
23:11:52 jsonschema-specifications==2025.4.1
23:11:52 keystoneauth1==5.11.1
23:11:52 kubernetes==33.1.0
23:11:52 lftools==0.37.13
23:11:52 lxml==5.4.0
23:11:52 MarkupSafe==3.0.2
23:11:52 msgpack==1.1.1
23:11:52 multi_key_dict==2.0.3
23:11:52 munch==4.0.0
23:11:52 netaddr==1.3.0
23:11:52 niet==1.4.2
23:11:52 nodeenv==1.9.1
23:11:52 oauth2client==4.1.3
23:11:52 oauthlib==3.2.2
23:11:52 openstacksdk==4.6.0
23:11:52 os-client-config==2.1.0
23:11:52 os-service-types==1.7.0
23:11:52 osc-lib==4.0.2
23:11:52 oslo.config==9.8.0
23:11:52 oslo.context==6.0.0
23:11:52 oslo.i18n==6.5.1
23:11:52 oslo.log==7.1.0
23:11:52 oslo.serialization==5.7.0
23:11:52 oslo.utils==9.0.0
23:11:52 packaging==25.0
23:11:52 pbr==6.1.1
23:11:52 platformdirs==4.3.8
23:11:52 prettytable==3.16.0
23:11:52 psutil==7.0.0
23:11:52 pyasn1==0.6.1
23:11:52 pyasn1_modules==0.4.2
23:11:52 pycparser==2.22
23:11:52 pygerrit2==2.0.15
23:11:52 PyGithub==2.6.1
23:11:52 PyJWT==2.10.1
23:11:52 PyNaCl==1.5.0
23:11:52 pyparsing==2.4.7
23:11:52 pyperclip==1.9.0
23:11:52 pyrsistent==0.20.0
23:11:52 python-cinderclient==9.7.0
23:11:52 python-dateutil==2.9.0.post0
23:11:52 python-heatclient==4.2.0
23:11:52 python-jenkins==1.8.2
23:11:52 python-keystoneclient==5.6.0
23:11:52 python-magnumclient==4.8.1
23:11:52 python-openstackclient==8.1.0
23:11:52 python-swiftclient==4.8.0
23:11:52 PyYAML==6.0.2
23:11:52 referencing==0.36.2
23:11:52 requests==2.32.4
23:11:52 requests-oauthlib==2.0.0
23:11:52 requestsexceptions==1.4.0
23:11:52 rfc3986==2.0.0
23:11:52 rpds-py==0.25.1
23:11:52 rsa==4.9.1
23:11:52 ruamel.yaml==0.18.14
23:11:52 ruamel.yaml.clib==0.2.12
23:11:52 s3transfer==0.13.0
23:11:52 simplejson==3.20.1
23:11:52 six==1.17.0
23:11:52 smmap==5.0.2
23:11:52 soupsieve==2.7
23:11:52 stevedore==5.4.1
23:11:52 tabulate==0.9.0
23:11:52 toml==0.10.2
23:11:52 tomlkit==0.13.3
23:11:52 tqdm==4.67.1
23:11:52 typing_extensions==4.14.0
23:11:52 tzdata==2025.2
23:11:52 urllib3==1.26.20
23:11:52 virtualenv==20.31.2
23:11:52 wcwidth==0.2.13
23:11:52 websocket-client==1.8.0
23:11:52 wrapt==1.17.2
23:11:52 xdg==6.0.0
23:11:52 xmltodict==0.14.2
23:11:52 yq==3.4.3
23:11:52 [EnvInject] - Injecting environment variables from a build step.
23:11:52 [EnvInject] - Injecting as environment variables the properties content
23:11:52 SET_JDK_VERSION=openjdk17
23:11:52 GIT_URL="git://cloud.onap.org/mirror"
23:11:52
23:11:52 [EnvInject] - Variables injected successfully.
23:11:52 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins1044944308728739529.sh
23:11:52 ---> update-java-alternatives.sh
23:11:52 ---> Updating Java version
23:11:52 ---> Ubuntu/Debian system detected
23:11:52 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:11:52 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:11:52 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:11:52 openjdk version "17.0.4" 2022-07-19
23:11:52 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:11:52 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:11:52 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:11:52 [EnvInject] - Injecting environment variables from a build step.
23:11:52 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:11:52 [EnvInject] - Variables injected successfully.
23:11:53 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins4039090778750723394.sh
23:11:53 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:11:53 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:11:53 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:11:53 Configure a credential helper to remove this warning. See
23:11:53 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:11:53
23:11:53 Login Succeeded
23:11:53 docker: 'compose' is not a docker command.
23:11:53 See 'docker --help'
23:11:53 Docker Compose Plugin not installed. Installing now...
23:11:53 % Total % Received % Xferd Average Speed Time Time Time Current
23:11:53 Dload Upload Total Spent Left Speed
23:11:53 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
23:11:54 1 60.2M 1 856k 0 0 3196k 0 0:00:19 --:--:-- 0:00:19 3196k 100 60.2M 100 60.2M 0 0 76.4M 0 --:--:-- --:--:-- --:--:-- 114M
23:11:54 Setting project configuration for: pap
23:11:54 Configuring docker compose...
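The two docker warnings above come from passing the registry password on the command line, and the compose plugin download happens because "docker compose" is not yet available on the agent. A minimal sketch of handling both steps, assuming placeholder credential variables and an illustrative plugin version (this is not the actual content of run-project-csit.sh):

    # log in without exposing the password in the process list (what the warning above recommends);
    # REGISTRY, REGISTRY_USER and REGISTRY_PASSWORD are placeholders, not values from the log
    echo "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "${REGISTRY}"

    # install the Docker Compose CLI plugin only if "docker compose" is missing; version is illustrative
    if ! docker compose version >/dev/null 2>&1; then
      mkdir -p ~/.docker/cli-plugins
      curl -SL https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64 \
        -o ~/.docker/cli-plugins/docker-compose
      chmod +x ~/.docker/cli-plugins/docker-compose
    fi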
23:11:56 Starting apex-pdp using postgres + Grafana/Prometheus
23:11:56 apex-pdp Pulling
23:11:56 kafka Pulling
23:11:56 pap Pulling
23:11:56 simulator Pulling
23:11:56 api Pulling
23:11:56 postgres Pulling
23:11:56 policy-db-migrator Pulling
23:11:56 prometheus Pulling
23:11:56 grafana Pulling
23:11:56 zookeeper Pulling
[23:11:56 - 23:12:03: per-layer "Pulling fs layer", Downloading, Verifying Checksum, Extracting and Pull complete progress for the image layers of all ten services]
23:12:01 api Pulled
23:12:01 pap Pulled
23:12:01 policy-db-migrator Pulled
[====================================> ] 78.94MB/109.1MB 23:12:03 8f10199ed94b Downloading [> ] 97.22kB/8.768MB 23:12:03 eabd8714fec9 Downloading [=====> ] 38.39MB/375MB 23:12:03 d223479d7367 Extracting [================> ] 2.163MB/6.742MB 23:12:03 55f2b468da67 Extracting [====> ] 23.4MB/257.9MB 23:12:03 c49e0ee60bfb Extracting [===================================> ] 76.32MB/107.3MB 23:12:03 18ce86a3284e Extracting [===================================> ] 129.2MB/182.3MB 23:12:03 e73cb4a42719 Downloading [============================================> ] 96.24MB/109.1MB 23:12:03 eca0188f477e Extracting [> ] 393.2kB/37.17MB 23:12:03 8f10199ed94b Downloading [============> ] 2.162MB/8.768MB 23:12:03 eabd8714fec9 Downloading [======> ] 51.9MB/375MB 23:12:03 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 23:12:03 d223479d7367 Extracting [========================> ] 3.342MB/6.742MB 23:12:03 c49e0ee60bfb Extracting [====================================> ] 77.43MB/107.3MB 23:12:03 18ce86a3284e Extracting [=====================================> ] 137MB/182.3MB 23:12:03 e73cb4a42719 Downloading [=================================================> ] 108.1MB/109.1MB 23:12:03 eca0188f477e Extracting [==> ] 1.966MB/37.17MB 23:12:03 8f10199ed94b Downloading [=========================> ] 4.423MB/8.768MB 23:12:03 eabd8714fec9 Downloading [========> ] 65.42MB/375MB 23:12:03 e73cb4a42719 Verifying Checksum 23:12:03 e73cb4a42719 Download complete 23:12:03 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 23:12:03 f963a77d2726 Downloading [=======> ] 3.01kB/21.44kB 23:12:03 f963a77d2726 Downloading [==================================================>] 21.44kB/21.44kB 23:12:03 f963a77d2726 Verifying Checksum 23:12:03 f963a77d2726 Download complete 23:12:03 d223479d7367 Extracting [===============================> ] 4.227MB/6.742MB 23:12:03 2d429b9e73a6 Extracting [==============================================> ] 26.84MB/29.13MB 23:12:03 18ce86a3284e Extracting [=======================================> ] 144.3MB/182.3MB 23:12:03 f3a82e9f1761 Downloading [> ] 457.7kB/44.41MB 23:12:03 8f10199ed94b Downloading [=======================================> ] 6.978MB/8.768MB 23:12:03 c49e0ee60bfb Extracting [====================================> ] 79.1MB/107.3MB 23:12:03 eca0188f477e Extracting [=====> ] 4.325MB/37.17MB 23:12:03 55f2b468da67 Extracting [======> ] 31.75MB/257.9MB 23:12:03 eabd8714fec9 Downloading [=========> ] 74.61MB/375MB 23:12:03 d223479d7367 Extracting [=======================================> ] 5.308MB/6.742MB 23:12:03 8f10199ed94b Verifying Checksum 23:12:03 8f10199ed94b Download complete 23:12:03 18ce86a3284e Extracting [=========================================> ] 152.1MB/182.3MB 23:12:03 79161a3f5362 Downloading [================================> ] 3.011kB/4.656kB 23:12:03 79161a3f5362 Downloading [==================================================>] 4.656kB/4.656kB 23:12:03 79161a3f5362 Download complete 23:12:03 c49e0ee60bfb Extracting [=====================================> ] 80.77MB/107.3MB 23:12:03 eabd8714fec9 Downloading [===========> ] 83.8MB/375MB 23:12:03 55f2b468da67 Extracting [=======> ] 40.11MB/257.9MB 23:12:03 f3a82e9f1761 Downloading [===> ] 3.21MB/44.41MB 23:12:03 9c266ba63f51 Downloading [==================================================>] 1.105kB/1.105kB 23:12:03 9c266ba63f51 Verifying Checksum 23:12:03 9c266ba63f51 Download complete 23:12:03 2d429b9e73a6 Extracting [================================================> ] 28.02MB/29.13MB 
23:12:03 2e8a7df9c2ee Downloading [==================================================>] 851B/851B 23:12:03 2e8a7df9c2ee Verifying Checksum 23:12:03 2e8a7df9c2ee Download complete 23:12:03 eca0188f477e Extracting [========> ] 6.685MB/37.17MB 23:12:03 d223479d7367 Extracting [==========================================> ] 5.702MB/6.742MB 23:12:03 10f05dd8b1db Downloading [==================================================>] 98B/98B 23:12:03 10f05dd8b1db Verifying Checksum 23:12:03 10f05dd8b1db Download complete 23:12:03 18ce86a3284e Extracting [===========================================> ] 159.3MB/182.3MB 23:12:03 41dac8b43ba6 Downloading [==================================================>] 171B/171B 23:12:03 41dac8b43ba6 Verifying Checksum 23:12:03 41dac8b43ba6 Download complete 23:12:03 c49e0ee60bfb Extracting [======================================> ] 83.56MB/107.3MB 23:12:03 eabd8714fec9 Downloading [============> ] 96.78MB/375MB 23:12:03 f3a82e9f1761 Downloading [=======> ] 6.421MB/44.41MB 23:12:03 55f2b468da67 Extracting [=========> ] 46.79MB/257.9MB 23:12:03 71a9f6a9ab4d Downloading [> ] 3.009kB/230.6kB 23:12:03 71a9f6a9ab4d Downloading [==================================================>] 230.6kB/230.6kB 23:12:03 71a9f6a9ab4d Verifying Checksum 23:12:03 71a9f6a9ab4d Download complete 23:12:03 eca0188f477e Extracting [=============> ] 9.83MB/37.17MB 23:12:03 18ce86a3284e Extracting [=============================================> ] 166.6MB/182.3MB 23:12:03 d223479d7367 Extracting [==================================================>] 6.742MB/6.742MB 23:12:03 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 23:12:03 eabd8714fec9 Downloading [=============> ] 104.9MB/375MB 23:12:03 c49e0ee60bfb Extracting [========================================> ] 86.9MB/107.3MB 23:12:03 55f2b468da67 Extracting [==========> ] 53.48MB/257.9MB 23:12:03 f3a82e9f1761 Downloading [==========> ] 9.174MB/44.41MB 23:12:03 da3ed5db7103 Downloading [> ] 539.6kB/127.4MB 23:12:03 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB 23:12:03 eca0188f477e Extracting [===================> ] 14.16MB/37.17MB 23:12:04 18ce86a3284e Extracting [===============================================> ] 172.7MB/182.3MB 23:12:04 eabd8714fec9 Downloading [===============> ] 119.5MB/375MB 23:12:04 55f2b468da67 Extracting [===========> ] 61.28MB/257.9MB 23:12:04 c49e0ee60bfb Extracting [==========================================> ] 90.8MB/107.3MB 23:12:04 f3a82e9f1761 Downloading [==============> ] 12.84MB/44.41MB 23:12:04 18ce86a3284e Extracting [==================================================>] 182.3MB/182.3MB 23:12:04 da3ed5db7103 Downloading [> ] 2.162MB/127.4MB 23:12:04 eca0188f477e Extracting [=======================> ] 17.3MB/37.17MB 23:12:04 eabd8714fec9 Downloading [=================> ] 131.9MB/375MB 23:12:04 55f2b468da67 Extracting [=============> ] 70.75MB/257.9MB 23:12:04 f3a82e9f1761 Downloading [==================> ] 16.06MB/44.41MB 23:12:04 c49e0ee60bfb Extracting [=============================================> ] 98.6MB/107.3MB 23:12:04 da3ed5db7103 Downloading [=> ] 3.784MB/127.4MB 23:12:04 eca0188f477e Extracting [============================> ] 21.23MB/37.17MB 23:12:04 eabd8714fec9 Downloading [==================> ] 139MB/375MB 23:12:04 da3ed5db7103 Downloading [=> ] 4.324MB/127.4MB 23:12:04 55f2b468da67 Extracting [===============> ] 77.99MB/257.9MB 23:12:04 f3a82e9f1761 Downloading [====================> ] 17.89MB/44.41MB 
23:12:04 c49e0ee60bfb Extracting [===============================================> ] 100.8MB/107.3MB 23:12:04 eca0188f477e Extracting [===============================> ] 23.59MB/37.17MB 23:12:04 eabd8714fec9 Downloading [====================> ] 153.5MB/375MB 23:12:04 da3ed5db7103 Downloading [======> ] 16.76MB/127.4MB 23:12:04 55f2b468da67 Extracting [=================> ] 88.57MB/257.9MB 23:12:04 f3a82e9f1761 Downloading [=================================> ] 29.82MB/44.41MB 23:12:04 c49e0ee60bfb Extracting [================================================> ] 103.6MB/107.3MB 23:12:04 eca0188f477e Extracting [====================================> ] 27.13MB/37.17MB 23:12:04 eabd8714fec9 Downloading [======================> ] 167.6MB/375MB 23:12:04 da3ed5db7103 Downloading [============> ] 32.98MB/127.4MB 23:12:04 55f2b468da67 Extracting [===================> ] 100.3MB/257.9MB 23:12:04 f3a82e9f1761 Downloading [=================================================> ] 43.58MB/44.41MB 23:12:04 f3a82e9f1761 Verifying Checksum 23:12:04 f3a82e9f1761 Download complete 23:12:04 c955f6e31a04 Downloading [===========================================> ] 3.011kB/3.446kB 23:12:04 c955f6e31a04 Downloading [==================================================>] 3.446kB/3.446kB 23:12:04 c955f6e31a04 Verifying Checksum 23:12:04 c955f6e31a04 Download complete 23:12:04 9fa9226be034 Downloading [> ] 15.3kB/783kB 23:12:04 eca0188f477e Extracting [=========================================> ] 30.67MB/37.17MB 23:12:04 eabd8714fec9 Downloading [========================> ] 181.7MB/375MB 23:12:04 da3ed5db7103 Downloading [==================> ] 46.5MB/127.4MB 23:12:04 c49e0ee60bfb Extracting [================================================> ] 104.7MB/107.3MB 23:12:04 9fa9226be034 Downloading [==================================================>] 783kB/783kB 23:12:04 9fa9226be034 Verifying Checksum 23:12:04 9fa9226be034 Download complete 23:12:04 9fa9226be034 Extracting [==> ] 32.77kB/783kB 23:12:04 55f2b468da67 Extracting [====================> ] 106.4MB/257.9MB 23:12:04 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 23:12:04 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 23:12:04 1617e25568b2 Verifying Checksum 23:12:04 1617e25568b2 Download complete 23:12:04 6ac0e4adf315 Downloading [> ] 539.6kB/62.07MB 23:12:04 eabd8714fec9 Downloading [==========================> ] 196.8MB/375MB 23:12:04 da3ed5db7103 Downloading [========================> ] 61.64MB/127.4MB 23:12:04 eca0188f477e Extracting [=============================================> ] 33.82MB/37.17MB 23:12:04 9fa9226be034 Extracting [==================================================>] 783kB/783kB 23:12:04 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 23:12:04 55f2b468da67 Extracting [=====================> ] 110.3MB/257.9MB 23:12:04 6ac0e4adf315 Downloading [======> ] 8.109MB/62.07MB 23:12:04 eabd8714fec9 Downloading [============================> ] 211.4MB/375MB 23:12:04 da3ed5db7103 Downloading [=============================> ] 75.15MB/127.4MB 23:12:04 eca0188f477e Extracting [================================================> ] 36.18MB/37.17MB 23:12:04 eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB 23:12:04 55f2b468da67 Extracting [======================> ] 113.6MB/257.9MB 23:12:04 6ac0e4adf315 Downloading [==============> ] 18.38MB/62.07MB 23:12:04 eabd8714fec9 Downloading [==============================> ] 229.2MB/375MB 
23:12:04 da3ed5db7103 Downloading [====================================> ] 92.99MB/127.4MB 23:12:05 55f2b468da67 Extracting [=======================> ] 118.7MB/257.9MB 23:12:05 da3ed5db7103 Downloading [==========================================> ] 109.2MB/127.4MB 23:12:05 eabd8714fec9 Downloading [================================> ] 245.5MB/375MB 23:12:05 6ac0e4adf315 Downloading [=========================> ] 31.36MB/62.07MB 23:12:05 55f2b468da67 Extracting [=======================> ] 119.8MB/257.9MB 23:12:05 eabd8714fec9 Downloading [==================================> ] 262.2MB/375MB 23:12:05 da3ed5db7103 Downloading [=================================================> ] 124.9MB/127.4MB 23:12:05 6ac0e4adf315 Downloading [=======================================> ] 48.66MB/62.07MB 23:12:05 55f2b468da67 Extracting [========================> ] 125.3MB/257.9MB 23:12:05 da3ed5db7103 Verifying Checksum 23:12:05 da3ed5db7103 Download complete 23:12:05 f3b09c502777 Downloading [> ] 539.6kB/56.52MB 23:12:05 6ac0e4adf315 Verifying Checksum 23:12:05 6ac0e4adf315 Download complete 23:12:05 eabd8714fec9 Downloading [=====================================> ] 278.4MB/375MB 23:12:05 408012a7b118 Downloading [==================================================>] 637B/637B 23:12:05 408012a7b118 Verifying Checksum 23:12:05 408012a7b118 Download complete 23:12:05 55f2b468da67 Extracting [=========================> ] 129.8MB/257.9MB 23:12:05 44986281b8b9 Downloading [=====================================> ] 3.011kB/4.022kB 23:12:05 44986281b8b9 Downloading [==================================================>] 4.022kB/4.022kB 23:12:05 44986281b8b9 Download complete 23:12:05 bf70c5107ab5 Downloading [==================================================>] 1.44kB/1.44kB 23:12:05 bf70c5107ab5 Verifying Checksum 23:12:05 bf70c5107ab5 Download complete 23:12:05 f3b09c502777 Downloading [========> ] 9.19MB/56.52MB 23:12:05 1ccde423731d Downloading [==> ] 3.01kB/61.44kB 23:12:05 1ccde423731d Downloading [==================================================>] 61.44kB/61.44kB 23:12:05 1ccde423731d Download complete 23:12:05 7221d93db8a9 Download complete 23:12:05 7df673c7455d Downloading [==================================================>] 694B/694B 23:12:05 7df673c7455d Verifying Checksum 23:12:05 7df673c7455d Download complete 23:12:05 eabd8714fec9 Downloading [=======================================> ] 294.1MB/375MB 23:12:05 55f2b468da67 Extracting [==========================> ] 135.9MB/257.9MB 23:12:05 f3b09c502777 Downloading [=====================> ] 24.33MB/56.52MB 23:12:05 eabd8714fec9 Downloading [=========================================> ] 307.6MB/375MB 23:12:05 55f2b468da67 Extracting [===========================> ] 140.4MB/257.9MB 23:12:05 f3b09c502777 Downloading [=================================> ] 37.85MB/56.52MB 23:12:05 eabd8714fec9 Downloading [===========================================> ] 323.3MB/375MB 23:12:05 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB 23:12:05 f3b09c502777 Downloading [===============================================> ] 53.53MB/56.52MB 23:12:05 f3b09c502777 Verifying Checksum 23:12:05 f3b09c502777 Download complete 23:12:05 2d429b9e73a6 Pull complete 23:12:05 eabd8714fec9 Downloading [=============================================> ] 338.5MB/375MB 23:12:05 55f2b468da67 Extracting [=============================> ] 149.8MB/257.9MB 23:12:05 eabd8714fec9 Downloading [===============================================> ] 355.8MB/375MB 23:12:05 55f2b468da67 
Extracting [==============================> ] 155.4MB/257.9MB 23:12:05 eabd8714fec9 Downloading [=================================================> ] 368.2MB/375MB 23:12:05 55f2b468da67 Extracting [==============================> ] 159.9MB/257.9MB 23:12:05 d223479d7367 Pull complete 23:12:05 18ce86a3284e Pull complete 23:12:05 eabd8714fec9 Verifying Checksum 23:12:05 eabd8714fec9 Download complete 23:12:06 55f2b468da67 Extracting [================================> ] 165.4MB/257.9MB 23:12:06 9fa9226be034 Pull complete 23:12:06 eca0188f477e Pull complete 23:12:06 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 23:12:06 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 23:12:06 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB 23:12:06 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 23:12:06 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 23:12:06 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 23:12:06 c49e0ee60bfb Pull complete 23:12:06 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 23:12:06 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB 23:12:06 ece604b40811 Extracting [==================================================>] 303B/303B 23:12:06 ece604b40811 Extracting [==================================================>] 303B/303B 23:12:06 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB 23:12:06 55f2b468da67 Extracting [====================================> ] 188.8MB/257.9MB 23:12:07 55f2b468da67 Extracting [=====================================> ] 192.7MB/257.9MB 23:12:07 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 23:12:07 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 23:12:07 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB 23:12:07 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 23:12:07 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 23:12:07 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 23:12:07 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 23:12:08 55f2b468da67 Extracting [========================================> ] 208.9MB/257.9MB 23:12:08 46eab5b44a35 Pull complete 23:12:08 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB 23:12:08 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 23:12:08 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB 23:12:08 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 23:12:08 55f2b468da67 Extracting [===========================================> ] 225.1MB/257.9MB 23:12:08 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 23:12:09 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 23:12:09 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 23:12:09 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB 23:12:09 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB 23:12:09 55f2b468da67 
Extracting [===============================================> ] 244.5MB/257.9MB 23:12:09 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 23:12:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 23:12:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 23:12:09 098efa8b34b7 Extracting [==================================================>] 1.154kB/1.154kB 23:12:09 098efa8b34b7 Extracting [==================================================>] 1.154kB/1.154kB 23:12:09 ece604b40811 Pull complete 23:12:09 e444bcd4d577 Extracting [==================================================>] 279B/279B 23:12:09 e444bcd4d577 Extracting [==================================================>] 279B/279B 23:12:09 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 23:12:09 c4d302cc468d Extracting [> ] 65.54kB/4.534MB 23:12:10 1617e25568b2 Extracting [========================================> ] 393.2kB/480.9kB 23:12:10 098efa8b34b7 Pull complete 23:12:10 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB 23:12:10 614e034e242f Extracting [==================================================>] 1.126kB/1.126kB 23:12:10 c4d302cc468d Extracting [===> ] 327.7kB/4.534MB 23:12:10 e444bcd4d577 Pull complete 23:12:10 55f2b468da67 Pull complete 23:12:10 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 23:12:10 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 23:12:10 384497dbce3b Extracting [> ] 557.1kB/63.48MB 23:12:10 82bfc142787e Extracting [> ] 98.3kB/8.613MB 23:12:10 c4d302cc468d Extracting [=======================================> ] 3.604MB/4.534MB 23:12:10 614e034e242f Pull complete 23:12:10 c01e672f2391 Extracting [> ] 557.1kB/263.6MB 23:12:10 simulator Pulled 23:12:10 c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 23:12:10 eabd8714fec9 Extracting [> ] 557.1kB/375MB 23:12:10 82bfc142787e Extracting [=======> ] 1.376MB/8.613MB 23:12:10 1617e25568b2 Pull complete 23:12:10 384497dbce3b Extracting [> ] 1.114MB/63.48MB 23:12:10 c4d302cc468d Pull complete 23:12:10 eabd8714fec9 Extracting [=> ] 9.47MB/375MB 23:12:10 c01e672f2391 Extracting [> ] 1.114MB/263.6MB 23:12:10 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 23:12:10 82bfc142787e Extracting [=======================================> ] 6.783MB/8.613MB 23:12:10 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 23:12:10 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 23:12:10 eabd8714fec9 Extracting [==> ] 17.27MB/375MB 23:12:10 c01e672f2391 Extracting [=> ] 7.799MB/263.6MB 23:12:10 82bfc142787e Pull complete 23:12:10 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 23:12:10 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 23:12:10 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 23:12:10 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 23:12:10 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 23:12:10 6ac0e4adf315 Extracting [==> ] 3.342MB/62.07MB 23:12:10 01e0882c90d9 Pull complete 23:12:10 c01e672f2391 Extracting [===> ] 17.27MB/263.6MB 23:12:10 eabd8714fec9 Extracting [==> ] 21.17MB/375MB 23:12:10 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 23:12:10 384497dbce3b Extracting [=> ] 
2.228MB/63.48MB 23:12:10 46baca71a4ef Pull complete 23:12:10 c01e672f2391 Extracting [====> ] 26.18MB/263.6MB 23:12:10 6ac0e4adf315 Extracting [====> ] 5.571MB/62.07MB 23:12:10 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB 23:12:10 384497dbce3b Extracting [==> ] 3.342MB/63.48MB 23:12:10 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 23:12:10 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 23:12:10 c01e672f2391 Extracting [=====> ] 31.2MB/263.6MB 23:12:10 531ee2cf3c0c Extracting [===================> ] 3.146MB/8.066MB 23:12:10 6ac0e4adf315 Extracting [======> ] 7.799MB/62.07MB 23:12:10 eabd8714fec9 Extracting [====> ] 30.08MB/375MB 23:12:10 b0e0ef7895f4 Extracting [=====> ] 3.932MB/37.01MB 23:12:10 531ee2cf3c0c Extracting [====================> ] 3.342MB/8.066MB 23:12:10 c01e672f2391 Extracting [======> ] 32.31MB/263.6MB 23:12:10 6ac0e4adf315 Extracting [=========> ] 12.26MB/62.07MB 23:12:10 eabd8714fec9 Extracting [======> ] 45.68MB/375MB 23:12:11 531ee2cf3c0c Extracting [===============================> ] 5.014MB/8.066MB 23:12:11 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 23:12:11 b0e0ef7895f4 Extracting [==============> ] 10.62MB/37.01MB 23:12:11 c01e672f2391 Extracting [=======> ] 40.11MB/263.6MB 23:12:11 eabd8714fec9 Extracting [=======> ] 52.92MB/375MB 23:12:11 6ac0e4adf315 Extracting [============> ] 15.04MB/62.07MB 23:12:11 531ee2cf3c0c Extracting [=======================================> ] 6.39MB/8.066MB 23:12:11 b0e0ef7895f4 Extracting [=====================> ] 16.12MB/37.01MB 23:12:11 c01e672f2391 Extracting [========> ] 44.01MB/263.6MB 23:12:11 eabd8714fec9 Extracting [========> ] 60.16MB/375MB 23:12:11 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 23:12:11 6ac0e4adf315 Extracting [=============> ] 17.27MB/62.07MB 23:12:11 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 23:12:11 c01e672f2391 Extracting [=========> ] 50.69MB/263.6MB 23:12:11 b0e0ef7895f4 Extracting [===================================> ] 25.95MB/37.01MB 23:12:11 eabd8714fec9 Extracting [========> ] 65.73MB/375MB 23:12:11 6ac0e4adf315 Extracting [=================> ] 22.28MB/62.07MB 23:12:11 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 23:12:11 c01e672f2391 Extracting [===========> ] 59.05MB/263.6MB 23:12:11 b0e0ef7895f4 Extracting [===============================================> ] 35.39MB/37.01MB 23:12:11 eabd8714fec9 Extracting [=========> ] 74.09MB/375MB 23:12:11 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 23:12:11 eabd8714fec9 Extracting [==========> ] 77.43MB/375MB 23:12:11 c01e672f2391 Extracting [============> ] 63.5MB/263.6MB 23:12:11 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 23:12:11 384497dbce3b Extracting [=======> ] 8.913MB/63.48MB 23:12:11 eabd8714fec9 Extracting [===========> ] 89.13MB/375MB 23:12:11 c01e672f2391 Extracting [=============> ] 70.19MB/263.6MB 23:12:11 6ac0e4adf315 Extracting [=====================> ] 27.3MB/62.07MB 23:12:11 384497dbce3b Extracting [========> ] 10.58MB/63.48MB 23:12:11 eabd8714fec9 Extracting [=============> ] 99.16MB/375MB 23:12:11 c01e672f2391 Extracting [==============> ] 77.43MB/263.6MB 23:12:11 6ac0e4adf315 Extracting [=========================> ] 31.2MB/62.07MB 23:12:11 384497dbce3b Extracting [==========> ] 12.81MB/63.48MB 23:12:12 c01e672f2391 Extracting [================> ] 86.34MB/263.6MB 23:12:12 eabd8714fec9 Extracting [==============> ] 107.5MB/375MB 23:12:12 6ac0e4adf315 Extracting [===============================> ] 
39.55MB/62.07MB 23:12:12 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 23:12:12 c01e672f2391 Extracting [==================> ] 96.37MB/263.6MB 23:12:12 eabd8714fec9 Extracting [===============> ] 112.5MB/375MB 23:12:12 6ac0e4adf315 Extracting [==========================================> ] 52.92MB/62.07MB 23:12:12 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB 23:12:12 c01e672f2391 Extracting [====================> ] 107MB/263.6MB 23:12:12 eabd8714fec9 Extracting [===============> ] 117.5MB/375MB 23:12:12 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB 23:12:12 384497dbce3b Extracting [================> ] 21.17MB/63.48MB 23:12:12 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 23:12:12 c01e672f2391 Extracting [=====================> ] 114.2MB/263.6MB 23:12:12 eabd8714fec9 Extracting [================> ] 120.3MB/375MB 23:12:12 384497dbce3b Extracting [=================> ] 22.28MB/63.48MB 23:12:12 c01e672f2391 Extracting [======================> ] 118.7MB/263.6MB 23:12:12 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 23:12:12 384497dbce3b Extracting [===================> ] 25.07MB/63.48MB 23:12:12 c01e672f2391 Extracting [========================> ] 129.8MB/263.6MB 23:12:12 eabd8714fec9 Extracting [=================> ] 130.4MB/375MB 23:12:12 c01e672f2391 Extracting [===========================> ] 143.2MB/263.6MB 23:12:12 eabd8714fec9 Extracting [=================> ] 134.3MB/375MB 23:12:12 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB 23:12:12 c01e672f2391 Extracting [=============================> ] 154.9MB/263.6MB 23:12:12 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 23:12:12 384497dbce3b Extracting [========================> ] 31.2MB/63.48MB 23:12:12 c01e672f2391 Extracting [===============================> ] 166.6MB/263.6MB 23:12:12 eabd8714fec9 Extracting [===================> ] 143.2MB/375MB 23:12:12 384497dbce3b Extracting [==========================> ] 33.42MB/63.48MB 23:12:13 c01e672f2391 Extracting [=================================> ] 178.3MB/263.6MB 23:12:13 eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 23:12:13 c01e672f2391 Extracting [==================================> ] 183.8MB/263.6MB 23:12:13 384497dbce3b Extracting [============================> ] 35.65MB/63.48MB 23:12:13 eabd8714fec9 Extracting [===================> ] 147.1MB/375MB 23:12:13 c01e672f2391 Extracting [====================================> ] 193.9MB/263.6MB 23:12:13 eabd8714fec9 Extracting [====================> ] 150.4MB/375MB 23:12:13 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB 23:12:13 c01e672f2391 Extracting [======================================> ] 203.9MB/263.6MB 23:12:13 eabd8714fec9 Extracting [====================> ] 155.4MB/375MB 23:12:13 384497dbce3b Extracting [================================> ] 41.78MB/63.48MB 23:12:13 c01e672f2391 Extracting [========================================> ] 213.4MB/263.6MB 23:12:13 eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB 23:12:13 384497dbce3b Extracting [===================================> ] 44.56MB/63.48MB 23:12:13 c01e672f2391 Extracting [==========================================> ] 226.2MB/263.6MB 23:12:13 eabd8714fec9 Extracting [======================> ] 165.4MB/375MB 23:12:13 531ee2cf3c0c Pull complete 23:12:13 384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB 23:12:13 c01e672f2391 Extracting 
[============================================> ] 232.8MB/263.6MB 23:12:13 eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 23:12:13 384497dbce3b Extracting [=======================================> ] 50.14MB/63.48MB 23:12:13 eabd8714fec9 Extracting [=======================> ] 175.5MB/375MB 23:12:13 eabd8714fec9 Extracting [=========================> ] 191.6MB/375MB 23:12:13 c01e672f2391 Extracting [==============================================> ] 242.9MB/263.6MB 23:12:14 384497dbce3b Extracting [=======================================> ] 50.69MB/63.48MB 23:12:14 eabd8714fec9 Extracting [==========================> ] 202.2MB/375MB 23:12:14 c01e672f2391 Extracting [===============================================> ] 249MB/263.6MB 23:12:14 384497dbce3b Extracting [=========================================> ] 52.92MB/63.48MB 23:12:14 eabd8714fec9 Extracting [============================> ] 210.6MB/375MB 23:12:14 c01e672f2391 Extracting [=================================================> ] 261.8MB/263.6MB 23:12:14 b0e0ef7895f4 Pull complete 23:12:14 c01e672f2391 Extracting [==================================================>] 263.6MB/263.6MB 23:12:14 384497dbce3b Extracting [=============================================> ] 57.93MB/63.48MB 23:12:14 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 23:12:14 384497dbce3b Extracting [==============================================> ] 58.49MB/63.48MB 23:12:14 6ac0e4adf315 Pull complete 23:12:14 eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 23:12:14 ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 23:12:14 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 23:12:14 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 23:12:14 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 23:12:14 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 23:12:14 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 23:12:14 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 23:12:14 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 23:12:14 eabd8714fec9 Extracting [=============================> ] 222.8MB/375MB 23:12:14 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 23:12:14 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 23:12:14 eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 23:12:14 eabd8714fec9 Extracting [===============================> ] 232.8MB/375MB 23:12:15 eabd8714fec9 Extracting [===============================> ] 239MB/375MB 23:12:15 eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 23:12:15 eabd8714fec9 Extracting [=================================> ] 250.1MB/375MB 23:12:15 eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB 23:12:15 eabd8714fec9 Extracting [==================================> ] 259.6MB/375MB 23:12:15 f3b09c502777 Extracting [> ] 557.1kB/56.52MB 23:12:15 eabd8714fec9 Extracting [===================================> ] 265.2MB/375MB 23:12:15 f3b09c502777 Extracting [====> ] 5.571MB/56.52MB 23:12:15 eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 23:12:15 f3b09c502777 Extracting [========> ] 10.03MB/56.52MB 23:12:15 eabd8714fec9 Extracting 
[====================================> ] 270.7MB/375MB 23:12:15 f3b09c502777 Extracting [===========> ] 13.37MB/56.52MB 23:12:15 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 23:12:15 f3b09c502777 Extracting [===============> ] 17.83MB/56.52MB 23:12:16 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 23:12:16 f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 23:12:16 ed54a7dee1d8 Pull complete 23:12:16 c01e672f2391 Pull complete 23:12:16 eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 23:12:16 f3b09c502777 Extracting [======================> ] 25.62MB/56.52MB 23:12:16 f3b09c502777 Extracting [============================> ] 31.75MB/56.52MB 23:12:16 eabd8714fec9 Extracting [=====================================> ] 282.4MB/375MB 23:12:16 f3b09c502777 Extracting [=========================================> ] 46.79MB/56.52MB 23:12:16 eabd8714fec9 Extracting [======================================> ] 286.9MB/375MB 23:12:16 f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB 23:12:16 eabd8714fec9 Extracting [=======================================> ] 293MB/375MB 23:12:16 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 23:12:16 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 23:12:16 eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB 23:12:16 eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 23:12:17 eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB 23:12:17 384497dbce3b Pull complete 23:12:17 c0c90eeb8aca Pull complete 23:12:17 eabd8714fec9 Extracting [========================================> ] 303MB/375MB 23:12:17 eabd8714fec9 Extracting [========================================> ] 305.3MB/375MB 23:12:17 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 23:12:18 eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 23:12:18 eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB 23:12:18 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 23:12:18 eabd8714fec9 Extracting [==========================================> ] 318.1MB/375MB 23:12:18 eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB 23:12:18 eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB 23:12:18 eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 23:12:18 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 23:12:18 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 23:12:18 eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB 23:12:19 12c5c803443f Extracting [==================================================>] 116B/116B 23:12:19 12c5c803443f Extracting [==================================================>] 116B/116B 23:12:19 eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB 23:12:19 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 23:12:19 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 23:12:19 055b9255fa03 Extracting [==================================================>] 
11.92kB/11.92kB 23:12:19 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 23:12:19 f3b09c502777 Pull complete 23:12:19 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 23:12:19 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 23:12:19 apex-pdp Pulled 23:12:19 408012a7b118 Extracting [==================================================>] 637B/637B 23:12:19 408012a7b118 Extracting [==================================================>] 637B/637B 23:12:19 12c5c803443f Pull complete 23:12:19 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 23:12:19 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 23:12:19 055b9255fa03 Pull complete 23:12:19 5cfb27c10ea5 Pull complete 23:12:19 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 23:12:19 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 23:12:19 40a5eed61bb0 Extracting [==================================================>] 98B/98B 23:12:19 40a5eed61bb0 Extracting [==================================================>] 98B/98B 23:12:19 408012a7b118 Pull complete 23:12:19 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 23:12:19 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 23:12:19 e27c75a98748 Pull complete 23:12:19 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 23:12:19 b176d7edde70 Pull complete 23:12:19 40a5eed61bb0 Pull complete 23:12:19 e040ea11fa10 Extracting [==================================================>] 173B/173B 23:12:19 e040ea11fa10 Extracting [==================================================>] 173B/173B 23:12:19 grafana Pulled 23:12:19 44986281b8b9 Pull complete 23:12:19 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 23:12:19 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 23:12:19 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 23:12:19 e040ea11fa10 Pull complete 23:12:19 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 23:12:19 e73cb4a42719 Extracting [===> ] 7.242MB/109.1MB 23:12:20 bf70c5107ab5 Pull complete 23:12:20 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 23:12:20 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 23:12:20 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 23:12:20 e73cb4a42719 Extracting [====> ] 8.913MB/109.1MB 23:12:20 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 23:12:20 09d5a3f70313 Extracting [====> ] 9.47MB/109.2MB 23:12:20 1ccde423731d Pull complete 23:12:20 e73cb4a42719 Extracting [=====> ] 12.81MB/109.1MB 23:12:20 7221d93db8a9 Extracting [==================================================>] 100B/100B 23:12:20 7221d93db8a9 Extracting [==================================================>] 100B/100B 23:12:20 eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 23:12:20 09d5a3f70313 Extracting [========> ] 18.38MB/109.2MB 23:12:20 e73cb4a42719 Extracting [=======> ] 15.6MB/109.1MB 23:12:20 eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB 23:12:20 7221d93db8a9 Pull complete 
23:12:20 7df673c7455d Extracting [==================================================>] 694B/694B 23:12:20 7df673c7455d Extracting [==================================================>] 694B/694B 23:12:20 09d5a3f70313 Extracting [===========> ] 26.18MB/109.2MB 23:12:20 e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 23:12:20 eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 23:12:20 09d5a3f70313 Extracting [==================> ] 39.55MB/109.2MB 23:12:20 7df673c7455d Pull complete 23:12:20 e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB 23:12:20 eabd8714fec9 Extracting [===============================================> ] 355.4MB/375MB 23:12:20 prometheus Pulled 23:12:20 09d5a3f70313 Extracting [======================> ] 48.46MB/109.2MB 23:12:20 e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 23:12:20 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 23:12:20 09d5a3f70313 Extracting [===========================> ] 60.72MB/109.2MB 23:12:20 e73cb4a42719 Extracting [==============> ] 31.75MB/109.1MB 23:12:20 eabd8714fec9 Extracting [================================================> ] 361MB/375MB 23:12:20 09d5a3f70313 Extracting [================================> ] 71.86MB/109.2MB 23:12:20 e73cb4a42719 Extracting [=================> ] 37.88MB/109.1MB 23:12:20 eabd8714fec9 Extracting [================================================> ] 366.5MB/375MB 23:12:20 09d5a3f70313 Extracting [=======================================> ] 86.9MB/109.2MB 23:12:20 e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB 23:12:20 09d5a3f70313 Extracting [============================================> ] 97.48MB/109.2MB 23:12:20 eabd8714fec9 Extracting [=================================================> ] 371.6MB/375MB 23:12:21 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 23:12:21 e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 23:12:21 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB 23:12:21 e73cb4a42719 Extracting [========================> ] 52.92MB/109.1MB 23:12:21 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 23:12:21 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 23:12:21 09d5a3f70313 Pull complete 23:12:21 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 23:12:21 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 23:12:21 e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB 23:12:21 eabd8714fec9 Pull complete 23:12:21 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 23:12:21 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 23:12:21 356f5c2c843b Pull complete 23:12:21 e73cb4a42719 Extracting [==========================> ] 57.38MB/109.1MB 23:12:21 kafka Pulled 23:12:21 45fd2fec8a19 Pull complete 23:12:21 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 23:12:21 e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 23:12:21 8f10199ed94b Extracting [======================> ] 4.03MB/8.768MB 23:12:21 e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 23:12:21 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 23:12:21 8f10199ed94b Pull 
complete 23:12:21 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 23:12:21 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 23:12:21 e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB 23:12:21 e73cb4a42719 Extracting [====================================> ] 79.1MB/109.1MB 23:12:21 f963a77d2726 Pull complete 23:12:21 e73cb4a42719 Extracting [=======================================> ] 86.9MB/109.1MB 23:12:21 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 23:12:22 e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 23:12:22 f3a82e9f1761 Extracting [===============> ] 13.76MB/44.41MB 23:12:22 e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB 23:12:22 f3a82e9f1761 Extracting [===========================> ] 24.77MB/44.41MB 23:12:22 f3a82e9f1761 Extracting [=============================================> ] 40.37MB/44.41MB 23:12:22 e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB 23:12:22 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 23:12:22 e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 23:12:22 e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 23:12:22 f3a82e9f1761 Pull complete 23:12:22 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 23:12:22 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 23:12:22 e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 23:12:22 79161a3f5362 Pull complete 23:12:22 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 23:12:22 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 23:12:22 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 23:12:22 9c266ba63f51 Pull complete 23:12:22 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 23:12:22 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 23:12:22 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 23:12:22 e73cb4a42719 Pull complete 23:12:22 2e8a7df9c2ee Pull complete 23:12:22 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 23:12:22 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 23:12:22 10f05dd8b1db Extracting [==================================================>] 98B/98B 23:12:22 10f05dd8b1db Extracting [==================================================>] 98B/98B 23:12:23 a83b68436f09 Pull complete 23:12:23 10f05dd8b1db Pull complete 23:12:23 787d6bee9571 Extracting [==================================================>] 127B/127B 23:12:23 787d6bee9571 Extracting [==================================================>] 127B/127B 23:12:23 41dac8b43ba6 Extracting [==================================================>] 171B/171B 23:12:23 41dac8b43ba6 Extracting [==================================================>] 171B/171B 23:12:23 787d6bee9571 Pull complete 23:12:23 13ff0988aaea Extracting [==================================================>] 167B/167B 23:12:23 13ff0988aaea Extracting 
[==================================================>] 167B/167B 23:12:23 41dac8b43ba6 Pull complete 23:12:23 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 23:12:23 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 23:12:23 13ff0988aaea Pull complete 23:12:23 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 23:12:23 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 23:12:23 71a9f6a9ab4d Pull complete 23:12:23 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 23:12:23 4b82842ab819 Pull complete 23:12:23 7e568a0dc8fb Extracting [==================================================>] 184B/184B 23:12:23 7e568a0dc8fb Extracting [==================================================>] 184B/184B 23:12:23 da3ed5db7103 Extracting [====> ] 10.58MB/127.4MB 23:12:23 da3ed5db7103 Extracting [========> ] 22.84MB/127.4MB 23:12:23 7e568a0dc8fb Pull complete 23:12:23 postgres Pulled 23:12:23 da3ed5db7103 Extracting [==============> ] 36.21MB/127.4MB 23:12:23 da3ed5db7103 Extracting [====================> ] 51.81MB/127.4MB 23:12:23 da3ed5db7103 Extracting [==========================> ] 66.29MB/127.4MB 23:12:24 da3ed5db7103 Extracting [================================> ] 82.44MB/127.4MB 23:12:24 da3ed5db7103 Extracting [=====================================> ] 96.37MB/127.4MB 23:12:24 da3ed5db7103 Extracting [===========================================> ] 111.4MB/127.4MB 23:12:24 da3ed5db7103 Extracting [===============================================> ] 120.3MB/127.4MB 23:12:24 da3ed5db7103 Extracting [================================================> ] 124.8MB/127.4MB 23:12:24 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 23:12:24 da3ed5db7103 Pull complete 23:12:24 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 23:12:24 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 23:12:24 c955f6e31a04 Pull complete 23:12:24 zookeeper Pulled 23:12:24 Network compose_default Creating 23:12:24 Network compose_default Created 23:12:24 Container simulator Creating 23:12:24 Container prometheus Creating 23:12:24 Container postgres Creating 23:12:24 Container zookeeper Creating 23:12:40 Container postgres Created 23:12:40 Container policy-db-migrator Creating 23:12:40 Container simulator Created 23:12:40 Container prometheus Created 23:12:40 Container grafana Creating 23:12:40 Container zookeeper Created 23:12:40 Container kafka Creating 23:12:40 Container policy-db-migrator Created 23:12:40 Container kafka Created 23:12:40 Container policy-api Creating 23:12:40 Container grafana Created 23:12:40 Container policy-api Created 23:12:40 Container policy-pap Creating 23:12:40 Container policy-pap Created 23:12:40 Container policy-apex-pdp Creating 23:12:40 Container policy-apex-pdp Created 23:12:40 Container zookeeper Starting 23:12:40 Container postgres Starting 23:12:40 Container simulator Starting 23:12:40 Container prometheus Starting 23:12:41 Container postgres Started 23:12:41 Container policy-db-migrator Starting 23:12:42 Container simulator Started 23:12:43 Container zookeeper Started 23:12:43 Container kafka Starting 23:12:44 Container policy-db-migrator Started 23:12:44 Container policy-api Starting 23:12:44 Container kafka Started 23:12:45 Container prometheus Started 23:12:45 Container grafana Starting 23:12:47 Container policy-api Started 23:12:47 
23:12:47 Container policy-pap Starting
23:12:47 Container policy-pap Started
23:12:47 Container policy-apex-pdp Starting
23:12:48 Container policy-apex-pdp Started
23:12:48 Container grafana Started
23:12:48 Prometheus server: http://localhost:30259
23:12:48 Grafana server: http://localhost:30269
23:12:48 Waiting 1 minute for policy-pap to start...
23:13:48 Checking if REST port 30003 is open on localhost ...
23:13:49 IMAGE                                                                NAMES             STATUS
23:13:49 nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
23:13:49 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
23:13:49 nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
23:13:49 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
23:13:49 nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
23:13:49 nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
23:13:49 Checking if REST port 30001 is open on localhost ...
23:13:49 IMAGE                                                                NAMES             STATUS
23:13:49 nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT           policy-apex-pdp   Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT                policy-pap        Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT                policy-api        Up About a minute
23:13:49 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9                   kafka             Up About a minute
23:13:49 nexus3.onap.org:10001/grafana/grafana:latest                        grafana           Up About a minute
23:13:49 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest              zookeeper         Up About a minute
23:13:49 nexus3.onap.org:10001/prom/prometheus:latest                        prometheus        Up About a minute
23:13:49 nexus3.onap.org:10001/library/postgres:16.4                         postgres          Up About a minute
23:13:49 nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT   simulator         Up About a minute
23:14:09 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/models'...
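The "Checking if REST port 30003/30001 is open on localhost ..." steps above gate the test run on the REST endpoints of the freshly started compose stack. Below is a minimal sketch, assuming a plain bash retry loop over bash's /dev/tcp pseudo-device, of how such a probe can be written; the wait_for_port helper, its 60-second timeout, and its one-second poll interval are illustrative assumptions and not taken from the actual CSIT scripts.

# Minimal sketch (assumed, not the real CSIT code) of a TCP port readiness probe.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-60}" i
  for (( i = 0; i < timeout; i++ )); do
    # /dev/tcp/<host>/<port> is a bash pseudo-device; the redirect succeeds once the port accepts a TCP connection
    if (echo -n > "/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "Port ${port} is open on ${host}"
      return 0
    fi
    sleep 1
  done
  echo "Timed out after ${timeout}s waiting for ${host}:${port}" >&2
  return 1
}

wait_for_port localhost 30003   # first port probed in the log above
wait_for_port localhost 30001   # second port probed in the log above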
23:14:10 Building robot framework docker image
23:14:46 sha256:51bab04da2b02f28adf34f270c94745666211e8b9c959b1d727f57d08e69510c
23:14:50 top - 23:14:50 up 4 min, 0 users, load average: 2.35, 1.92, 0.85
23:14:50 Tasks: 232 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
23:14:50 %Cpu(s): 14.4 us, 3.5 sy, 0.0 ni, 77.8 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st
23:14:50
23:14:50 total used free shared buff/cache available
23:14:50 Mem: 31G 2.7G 20G 28M 8.1G 28G
23:14:50 Swap: 1.0G 0B 1.0G
23:14:50
23:14:50 IMAGE NAMES STATUS
23:14:50 nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT policy-apex-pdp Up 2 minutes
23:14:50 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up 2 minutes
23:14:50 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up 2 minutes
23:14:50 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up 2 minutes
23:14:50 nexus3.onap.org:10001/grafana/grafana:latest grafana Up 2 minutes
23:14:50 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up 2 minutes
23:14:50 nexus3.onap.org:10001/prom/prometheus:latest prometheus Up 2 minutes
23:14:50 nexus3.onap.org:10001/library/postgres:16.4 postgres Up 2 minutes
23:14:50 nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT simulator Up 2 minutes
23:14:50
23:14:53 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:14:53 149294c4c873 policy-apex-pdp 0.70% 221.6MiB / 31.41GiB 0.69% 49.8kB / 63.8kB 0B / 0B 52
23:14:53 705c561de1b8 policy-pap 1.18% 534.3MiB / 31.41GiB 1.66% 132kB / 220kB 0B / 139MB 68
23:14:53 0b9208b93370 policy-api 0.10% 432.7MiB / 31.41GiB 1.35% 1.15MB / 1.02MB 0B / 0B 57
23:14:53 c35f2d5ddf22 kafka 1.51% 391.9MiB / 31.41GiB 1.22% 206kB / 185kB 0B / 590kB 83
23:14:53 c1e40e4c0129 grafana 0.14% 109.1MiB / 31.41GiB 0.34% 19.1MB / 194kB 0B / 31.5MB 21
23:14:53 fed4460352c9 zookeeper 0.08% 84.66MiB / 31.41GiB 0.26% 53.3kB / 47.4kB 0B / 401kB 62
23:14:53 c456f201b6c5 prometheus 0.01% 21.16MiB / 31.41GiB 0.07% 132kB / 5.44kB 98.3kB / 0B 13
23:14:53 9a6edbfd3a53 postgres 0.00% 85.12MiB / 31.41GiB 0.26% 1.67MB / 1.73MB 4.1kB / 158MB 26
23:14:53 69aaa4c20162 simulator 0.07% 123.9MiB / 31.41GiB 0.39% 1.68kB / 0B 127kB / 0B 64
23:14:53
23:14:53 Container policy-csit Creating
23:14:53 Container policy-csit Created
23:14:53 Attaching to policy-csit
23:14:54 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
23:14:54 policy-csit | Run Robot test
23:14:54 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
23:14:54 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
23:14:54 policy-csit | -v POLICY_API_IP:policy-api:6969
23:14:54 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
23:14:54 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
23:14:54 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
23:14:54 policy-csit | -v APEX_IP:policy-apex-pdp:6969
23:14:54 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
23:14:54 policy-csit | -v KAFKA_IP:kafka:9092
23:14:54 policy-csit | -v PROMETHEUS_IP:prometheus:9090
23:14:54 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
23:14:54 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
23:14:54 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
23:14:54 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
23:14:54 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
23:14:54 policy-csit | -v TEMP_FOLDER:/tmp/distribution
23:14:54
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 23:14:54 policy-csit | -v TEST_ENV:docker 23:14:54 policy-csit | -v JAEGER_IP:jaeger:16686 23:14:54 policy-csit | Starting Robot test suites ... 23:14:54 policy-csit | ============================================================================== 23:14:54 policy-csit | Pap-Test & Pap-Slas 23:14:54 policy-csit | ============================================================================== 23:14:54 policy-csit | Pap-Test & Pap-Slas.Pap-Test 23:14:54 policy-csit | ============================================================================== 23:14:55 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:14:55 policy-csit | ------------------------------------------------------------------------------ 23:14:55 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:14:55 policy-csit | ------------------------------------------------------------------------------ 23:14:56 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | 23:14:56 policy-csit | ------------------------------------------------------------------------------ 23:14:56 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 23:14:56 policy-csit | ------------------------------------------------------------------------------ 23:15:16 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:15:16 policy-csit | ------------------------------------------------------------------------------ 23:15:17 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:15:17 policy-csit | ------------------------------------------------------------------------------ 23:15:17 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:15:17 policy-csit | ------------------------------------------------------------------------------ 23:15:17 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:15:17 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:18 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:15:18 policy-csit | ------------------------------------------------------------------------------ 23:15:19 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... 
| PASS | 23:15:19 policy-csit | ------------------------------------------------------------------------------ 23:15:19 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:15:19 policy-csit | ------------------------------------------------------------------------------ 23:15:19 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:15:19 policy-csit | ------------------------------------------------------------------------------ 23:15:19 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:15:19 policy-csit | ------------------------------------------------------------------------------ 23:15:20 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:15:20 policy-csit | ------------------------------------------------------------------------------ 23:15:20 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:15:20 policy-csit | ------------------------------------------------------------------------------ 23:15:20 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 23:15:20 policy-csit | ------------------------------------------------------------------------------ 23:15:20 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:15:20 policy-csit | ------------------------------------------------------------------------------ 23:15:20 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 23:15:20 policy-csit | 22 tests, 22 passed, 0 failed 23:15:20 policy-csit | ============================================================================== 23:15:20 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 23:15:20 policy-csit | ============================================================================== 23:16:20 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | 23:16:20 policy-csit | ------------------------------------------------------------------------------ 23:16:20 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 23:16:20 policy-csit | 8 tests, 8 passed, 0 failed 23:16:20 policy-csit | ============================================================================== 23:16:20 policy-csit | Pap-Test & Pap-Slas | PASS | 23:16:20 policy-csit | 30 tests, 30 passed, 0 failed 23:16:20 policy-csit | ============================================================================== 23:16:20 policy-csit | Output: /tmp/results/output.xml 23:16:20 policy-csit | Log: /tmp/results/log.html 23:16:20 policy-csit | Report: /tmp/results/report.html 23:16:20 policy-csit | RESULT: 0 23:16:21 policy-csit exited with code 0 23:16:21 IMAGE NAMES STATUS 23:16:21 nexus3.onap.org:10001/onap/policy-apex-pdp:4.2.1-SNAPSHOT policy-apex-pdp Up 3 minutes 23:16:21 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up 3 minutes 23:16:21 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up 3 minutes 23:16:21 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up 3 minutes 23:16:21 nexus3.onap.org:10001/grafana/grafana:latest grafana Up 3 minutes 23:16:21 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up 3 minutes 23:16:21 nexus3.onap.org:10001/prom/prometheus:latest prometheus Up 3 minutes 23:16:21 nexus3.onap.org:10001/library/postgres:16.4 postgres Up 3 minutes 23:16:21 nexus3.onap.org:10001/onap/policy-models-simulator:4.2.1-SNAPSHOT simulator Up 3 minutes 23:16:21 Shut down started! 23:16:22 Collecting logs from docker compose containers... 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.538596747Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-13T23:12:49Z 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539029958Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539044739Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539050569Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.53905488Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.53905897Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.53906339Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.53906798Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.53907194Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539101062Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539106742Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539112382Z level=info 
msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539116822Z level=info msg=Target target=[all] 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539134433Z level=info msg="Path Home" path=/usr/share/grafana 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539139494Z level=info msg="Path Data" path=/var/lib/grafana 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539143674Z level=info msg="Path Logs" path=/var/log/grafana 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539148484Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539181426Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:23 grafana | logger=settings t=2025-06-13T23:12:49.539190216Z level=info msg="App mode production" 23:16:23 grafana | logger=featuremgmt t=2025-06-13T23:12:49.539665019Z level=info msg=FeatureToggles formatString=true dashboardSceneForViewers=true panelMonitoring=true externalCorePlugins=true dashboardScene=true preinstallAutoUpdate=true alertingUIOptimizeReducer=true promQLScope=true influxdbBackendMigration=true alertingQueryAndExpressionsStepMode=true alertingNotificationsStepMode=true dataplaneFrontendFallback=true cloudWatchCrossAccountQuerying=true pluginsDetailsRightPanel=true reportingUseRawTimeRange=true failWrongDSUID=true correlations=true awsAsyncQueryCaching=true unifiedStorageSearchPermissionFiltering=true dashboardSceneSolo=true alertingSimplifiedRouting=true annotationPermissionUpdate=true lokiQuerySplitting=true kubernetesPlaylists=true useSessionStorageForRedirection=true prometheusUsesCombobox=true logsContextDatasourceUi=true logsExploreTableVisualisation=true dashgpt=true alertRuleRestore=true logsInfiniteScrolling=true transformationsRedesign=true lokiQueryHints=true onPremToCloudMigrations=true ssoSettingsSAML=true alertingApiServer=true prometheusAzureOverrideAudience=true alertingRuleRecoverDeleted=true newDashboardSharingComponent=true logsPanelControls=true cloudWatchRoundUpEndTime=true nestedFolders=true recoveryThreshold=true kubernetesClientDashboardsFolders=true newFiltersUI=true grafanaconThemes=true tlsMemcached=true addFieldFromCalculationStatFunctions=true cloudWatchNewLabelParsing=true alertingInsights=true logRowsPopoverMenu=true publicDashboardsScene=true alertingRuleVersionHistoryRestore=true groupToNestedTableTransformation=true lokiLabelNamesQueryApi=true recordedQueriesMulti=true unifiedRequestLog=true angularDeprecationUI=true newPDFRendering=true pinNavItems=true alertingRulePermanentlyDelete=true azureMonitorPrometheusExemplars=true lokiStructuredMetadata=true ssoSettingsApi=true azureMonitorEnableUserAuth=true 23:16:23 grafana | logger=sqlstore t=2025-06-13T23:12:49.539731312Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:23 grafana | logger=sqlstore t=2025-06-13T23:12:49.539770384Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.541725128Z level=info msg="Locking database" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.541741199Z level=info msg="Starting DB migrations" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.542474374Z level=info msg="Executing migration" id="create migration_log table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.543535645Z level=info msg="Migration successfully executed" 
id="create migration_log table" duration=1.053671ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.547535458Z level=info msg="Executing migration" id="create user table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.548128236Z level=info msg="Migration successfully executed" id="create user table" duration=591.878µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.553943307Z level=info msg="Executing migration" id="add unique index user.login" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.554737395Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=793.439µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.558278165Z level=info msg="Executing migration" id="add unique index user.email" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.559059483Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=778.358µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.562562932Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.563395002Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=831.75µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.568688327Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.569510776Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=821.819µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.574561109Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.578387774Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.827355ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.583792314Z level=info msg="Executing migration" id="create user table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.584791052Z level=info msg="Migration successfully executed" id="create user table v2" duration=995.458µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.591520186Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.592339086Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=817.219µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.597687803Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.59846002Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=772.177µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.603532145Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.604179216Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=649.912µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.609510343Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.610534502Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.02361ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.614113104Z level=info msg="Executing migration" id="Add column 
help_flags1 to user table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.616078319Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.964475ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.620634788Z level=info msg="Executing migration" id="Update user table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.620757414Z level=info msg="Migration successfully executed" id="Update user table charset" duration=79.274µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.626057689Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.627949771Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.890021ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.631550884Z level=info msg="Executing migration" id="Add missing user data" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.632128872Z level=info msg="Migration successfully executed" id="Add missing user data" duration=577.428µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.636546725Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.637750182Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.198628ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.642462289Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.643281629Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=818.69µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.647984895Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.649280148Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.296183ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.652397108Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.663101943Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.704715ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.667765968Z level=info msg="Executing migration" id="Add uid column to user" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.668656621Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=890.243µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.673383999Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.673702774Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=318.056µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.678483674Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.679299353Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=815.149µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.682501368Z level=info msg="Executing migration" id="Add is_provisioned column to user" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.683742527Z 
level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.239369ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.687041226Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.687457146Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=415.43µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.691881569Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.693900157Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=2.017817ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.700687873Z level=info msg="Executing migration" id="update login and email fields to lowercase" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.701523484Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=844.701µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.704699457Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.705036783Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=337.756µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.708039548Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.708778963Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=739.026µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.713333212Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.71390293Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=569.238µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.716781148Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.717339865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=558.437µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.720107429Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.720729859Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=621.8µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.726112888Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.726706246Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=593.338µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.732548078Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.732606521Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=61.343µs 23:16:23 
grafana | logger=migrator t=2025-06-13T23:12:49.735987703Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.737505577Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.516463ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.740912841Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.742239785Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.325594ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.749078684Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.749812849Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=729.165µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.753388791Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.754521336Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.132075ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.758107999Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.762111522Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.003542ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.768587973Z level=info msg="Executing migration" id="create temp_user v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.77039565Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.806897ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.774734699Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.775630533Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=896.783µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.779033516Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.779743431Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=710.545µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.782682492Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.783372495Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=689.833µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.786206502Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.786858353Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=648.371µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.791215343Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.791638343Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=419.88µs 23:16:23 
grafana | logger=migrator t=2025-06-13T23:12:49.795404475Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.796035575Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=580.248µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.799731193Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.800127542Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=396.149µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.804871381Z level=info msg="Executing migration" id="create star table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.805420577Z level=info msg="Migration successfully executed" id="create star table" duration=548.416µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.808216962Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.808919736Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=702.184µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.811845527Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.813014023Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.167336ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.816166595Z level=info msg="Executing migration" id="Add column org_id in star" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.817295199Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.127604ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.823229665Z level=info msg="Executing migration" id="Add column updated in star" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.824443123Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.212498ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.829222263Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.829822972Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=600.299µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.832861359Z level=info msg="Executing migration" id="create org table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.83350625Z level=info msg="Migration successfully executed" id="create org table v1" duration=644.161µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.83641825Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.837109123Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=691.063µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.84182081Z level=info msg="Executing migration" id="create org_user table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.84244521Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=623.59µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:49.845312098Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.845953009Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=640.021µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.84991241Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.850617354Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=704.054µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.853596777Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.854293191Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=695.534µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.859211768Z level=info msg="Executing migration" id="Update org table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.859233289Z level=info msg="Migration successfully executed" id="Update org table charset" duration=22.471µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.862004942Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.862031914Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.151µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.864318854Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.864498262Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=178.128µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.867300907Z level=info msg="Executing migration" id="create dashboard table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.867878285Z level=info msg="Migration successfully executed" id="create dashboard table" duration=576.928µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.872499438Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.873127508Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=627.24µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.877546741Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.878191852Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=646.722µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.881615477Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.8821085Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=492.333µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.887137863Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.88791978Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=781.008µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:49.890974077Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.891499423Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=525.766µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.89435405Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.898037387Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=3.684107ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.903833187Z level=info msg="Executing migration" id="create dashboard v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.904378243Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=544.556µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.907088753Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.90763961Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=551.217µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.910677666Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.911356819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=678.243µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.917182109Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.917516186Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=332.956µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.920539731Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.921208983Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=668.512µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.924205838Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.924218558Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=13.06µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.929616098Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.931005805Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.388917ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.934956085Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.937169112Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.212107ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.940076312Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.941464939Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.387647ms 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:49.94606041Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.946702261Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=640.951µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.950465802Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.95187923Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.412878ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.954699896Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.955281094Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=577.878µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.960723676Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.961294724Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=570.328µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.964176563Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.96433526Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=158.057µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.967893652Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.967914343Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=20.521µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.970846844Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.972329145Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.482261ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.976645103Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.978141025Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.494372ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.981250045Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.982737386Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.485641ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.985641976Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.987139698Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.496242ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.992638533Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.993334017Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=698.204µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.997226844Z 
level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:49.998617831Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.396007ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.003938511Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.004851725Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=913.074µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.009555784Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.009623917Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=72.293µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.014463373Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.016007188Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.542555ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.021679174Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.022543126Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=863.522µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.025769673Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.030547475Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.779382ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.035421882Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.03598684Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=564.377µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.039026067Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.04010788Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.079913ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.044339156Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.045713833Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.374317ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.051450992Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.051786908Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=335.766µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.054817555Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:50.055391103Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=568.408µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.058589269Z level=info msg="Executing migration" id="Add check_sum column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.061956863Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.366643ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.068449548Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.069282939Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=832.641µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.07219108Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.07238015Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=189.929µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.075534603Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.075842648Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=312.535µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.079118177Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.080382229Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.263432ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.090856468Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.093830643Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.976085ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.097040149Z level=info msg="Executing migration" id="Add deleted for dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.099588543Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.550134ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.102632371Z level=info msg="Executing migration" id="Add index for deleted" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.103311644Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=678.913µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.106207015Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.107943109Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=1.736054ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.112160774Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.114693778Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.532283ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.11885873Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.119392696Z level=info msg="Migration successfully executed" 
id="Add missing dashboard_uid and org_id to dashboard_tag" duration=533.056µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.122230724Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.124881863Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.647319ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.129638874Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.13058392Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=945.156µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.133616668Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.134143923Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=526.185µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.137278216Z level=info msg="Executing migration" id="create data_source table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.138259814Z level=info msg="Migration successfully executed" id="create data_source table" duration=977.757µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.143679087Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.145030133Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.348586ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.14888159Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.15031236Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.43046ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.153522456Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.154308654Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=785.628µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.157440907Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.158201584Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=759.827µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.162747835Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.17212049Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.373066ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.175764968Z level=info msg="Executing migration" id="create data_source table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.176695983Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=930.365µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.181246414Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:23 grafana | 
logger=migrator t=2025-06-13T23:12:50.182077245Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=830.701µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.185283861Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.186151603Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=867.782µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.18918065Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.189761688Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=580.378µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.195092258Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.199014908Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.91687ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.2025349Z level=info msg="Executing migration" id="Add secure json data column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.205980557Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.447968ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.212525256Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.212551117Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.012µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.218525977Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.21878729Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=260.783µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.221134424Z level=info msg="Executing migration" id="Add read_only data column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.223595554Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.46314ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.226550918Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.226772118Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=220.64µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.229143784Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.229348474Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=204.43µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.233554788Z level=info msg="Executing migration" id="Add uid column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.23749211Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.936282ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.241100815Z level=info msg="Executing migration" id="Update uid value" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.241502215Z level=info msg="Migration successfully executed" id="Update uid value" duration=400.2µs 23:16:23 
grafana | logger=migrator t=2025-06-13T23:12:50.24593575Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.246915868Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=979.858µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.251922582Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.253408904Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.485553ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.257169247Z level=info msg="Executing migration" id="Add is_prunable column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.261878796Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.709839ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.268226825Z level=info msg="Executing migration" id="Add api_version column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.270706815Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.479711ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.277409621Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.277446043Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=40.392µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.282996883Z level=info msg="Executing migration" id="create api_key table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.284345249Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.352325ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.288306921Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.289607704Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.300083ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.293815999Z level=info msg="Executing migration" id="add index api_key.key" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.294558505Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=742.226µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.297357721Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.29814196Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=783.878µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.304376333Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.305202203Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=821.32µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.308156967Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.308959456Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=801.508µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.315940875Z level=info msg="Executing 
migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.316730974Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=790.068µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.31995002Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.326894568Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.943108ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.330350356Z level=info msg="Executing migration" id="create api_key table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.33105807Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=706.464µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.336653253Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.337870372Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.21674ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.341369802Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.342581401Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.210169ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.346292601Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.347646467Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.353276ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.353602987Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.354158554Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=555.417µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.35757508Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.35838994Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=817.92µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.361409997Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.361433448Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.252µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.367512913Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.369554593Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.04128ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.372815661Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.374662301Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.84602ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.377993113Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:23 grafana | 
logger=migrator t=2025-06-13T23:12:50.378153961Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=160.918µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.383587715Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.391390125Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=7.795649ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.395040402Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.397792296Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.753734ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.401038894Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.40178782Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=748.536µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.406835946Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.407403833Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=567.587µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.410572258Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.411380167Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=807.42µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.414492688Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.415516668Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.02226ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.421943801Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.423264865Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.321075ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.426626428Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.427866839Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.239341ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.431057964Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.431085015Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=28.001µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.436042976Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.436074408Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=31.092µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:50.439050853Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.441991856Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.939753ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.448626648Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.451371482Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.744464ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.454449592Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.454478423Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=29.022µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.45955066Z level=info msg="Executing migration" id="create quota table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.460256884Z level=info msg="Migration successfully executed" id="create quota table v1" duration=705.234µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.463324743Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.464631077Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.305384ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.468979638Z level=info msg="Executing migration" id="Update quota table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.469005849Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.841µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.475047543Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.476969017Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.922944ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.481078637Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.482077715Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=998.318µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.487160992Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.493800885Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.638243ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.499286322Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.499362606Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=85.214µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.502922799Z level=info msg="Executing migration" id="update NULL org_id to 1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.503447945Z level=info msg="Migration successfully executed" id="update NULL 
org_id to 1" duration=524.805µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.506542665Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.518516007Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=11.971602ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.52556048Z level=info msg="Executing migration" id="create session table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.526214452Z level=info msg="Migration successfully executed" id="create session table" duration=654.372µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.530020197Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.530106131Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.824µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.532200333Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.532278767Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=78.964µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.535250831Z level=info msg="Executing migration" id="create playlist table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.535901373Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=650.492µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.542604699Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.544786585Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=2.181696ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.549074084Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.549099895Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.741µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.551898551Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.551925822Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=31.651µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.556986878Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.561016654Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.029386ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.564652961Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.567863997Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.210686ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.573298452Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.573729083Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=430.681µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.578315886Z level=info 
msg="Executing migration" id="drop preferences table v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.578445312Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=131.266µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.58169906Z level=info msg="Executing migration" id="create preferences table v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.582705169Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=983.848µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.585789319Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.585816041Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=27.371µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.591528708Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.59485315Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.323382ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.598627744Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.59876266Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=134.746µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.600892924Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.603167984Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.27451ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.609526294Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.612803753Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.278269ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.61582643Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.615865642Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=35.352µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.619745391Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.620737279Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=991.088µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.623615999Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.624588166Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=974.647µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.62899045Z level=info msg="Executing migration" id="create alert table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.63001889Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.02773ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.633627646Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:23 grafana 
| logger=migrator t=2025-06-13T23:12:50.634491848Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=863.192µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.638964385Z level=info msg="Executing migration" id="add index alert state" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.639960954Z level=info msg="Migration successfully executed" id="add index alert state" duration=996.399µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.643271915Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.644133317Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=860.672µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.647435687Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.648080829Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=644.812µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.651333687Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.652202609Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=868.452µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.656869616Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.658012322Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.141286ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.661784985Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.678057557Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=16.265781ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.682196108Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.683016358Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=820.28µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.689271742Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.690395587Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.124245ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.694635243Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.694951538Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=316.175µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.698030598Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.698594276Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=567.227µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:50.70566917Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.70650208Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=832.53µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.709844753Z level=info msg="Executing migration" id="Add column is_default" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.716289586Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.439853ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.720410797Z level=info msg="Executing migration" id="Add column frequency" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.723136109Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.724292ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.728451558Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.732348567Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.896279ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.735226637Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.739475364Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.247357ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.74330728Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.744345171Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.037911ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.752777191Z level=info msg="Executing migration" id="Update alert table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.752834173Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=59.763µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.756684031Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.756715662Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=31.951µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.760614852Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.76180365Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.190648ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.765873528Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.766994002Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.119644ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.772413506Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.77331851Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=903.874µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.77682653Z 
level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.77824944Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.415549ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.78155164Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.783161378Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.608568ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.788756511Z level=info msg="Executing migration" id="Add for to alert table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.793210217Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.452957ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.796310558Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.801093951Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.779872ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.806655811Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.806956086Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=296.594µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.815503641Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.817024555Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.522664ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.821178637Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.822127714Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=944.886µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.827704015Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.832924409Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.219004ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.837665209Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.837693101Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=30.932µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.840587481Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.841464364Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=877.223µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.844347424Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.84508844Z level=info 
msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=741.296µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.851118994Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.851235989Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=117.066µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.85412728Z level=info msg="Executing migration" id="create annotation table v5" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.855212243Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.084323ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.858758225Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.859446349Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=687.463µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.865721354Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.867798765Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=2.073071ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.871210531Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.872580787Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.370996ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.876183163Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.877282156Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.098204ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.882931241Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.883884687Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=953.286µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.886927995Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.886957727Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.372µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.889940672Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.894569897Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.622864ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.898100889Z level=info msg="Executing migration" id="Drop category_id index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.899030904Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=929.696µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.903560984Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.908052233Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.490318ms 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:50.911575264Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.912277088Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=702.074µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.917903172Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.918887599Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=983.417µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.921787401Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.922669063Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=885.763µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.92588042Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.937010711Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.129111ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.942668886Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.943282696Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=614.34µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.946278512Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.947221548Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=941.785µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.949932999Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.950231604Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=298.105µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.953061062Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.953678182Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=616.58µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.959223441Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.959470123Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=249.672µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.962591915Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.967061202Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.468777ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.969989045Z level=info msg="Executing migration" id="Add updated 
time to annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.97502904Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.039405ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.980686045Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.981560308Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=874.022µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.98469134Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.986063797Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.371626ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.989334556Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.989695773Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=361.347µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.995132518Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:50.999389125Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.255817ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.002323436Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.003225119Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=900.793µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.006227123Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.006395041Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=167.558µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.00887688Z level=info msg="Executing migration" id="Move region to single row" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.009242918Z level=info msg="Migration successfully executed" id="Move region to single row" duration=365.348µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.014761473Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.015564751Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=802.368µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.01843898Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.019237108Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=797.549µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.025222605Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.026110668Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation 
table" duration=887.033µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.029736462Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.030631255Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=894.113µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.03323378Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.034023738Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=791.998µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.038879521Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.039725772Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=845.351µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.042281575Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.042302866Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=20.691µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.044497801Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.044516272Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=19.061µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.050752032Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.050773273Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=21.201µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.053079584Z level=info msg="Executing migration" id="create test_data table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.054368446Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.26823ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.057665564Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.058963146Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.296552ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.062484745Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.063321926Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=837.201µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.068358658Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.069251521Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=892.572µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.072039964Z level=info 
msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.072218743Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=176.399µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.075750643Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.076096729Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=345.636µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.082479336Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.082506487Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=28.191µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.085002617Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.091793533Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=6.791686ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.094457161Z level=info msg="Executing migration" id="create team table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.095206397Z level=info msg="Migration successfully executed" id="create team table" duration=748.126µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.100379086Z level=info msg="Executing migration" id="add index team.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.101247698Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=867.551µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.10545172Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.106581624Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.129075ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.110332814Z level=info msg="Executing migration" id="Add column uid in team" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.11483896Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.503176ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.117441506Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.117615914Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=173.869µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.122483418Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.12335624Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=872.692µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.126176515Z level=info msg="Executing migration" id="Add column external_uid in team" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.130706873Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.525907ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.134118607Z level=info msg="Executing migration" id="Add column 
is_provisioned in team" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.138778211Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.648163ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.14376194Z level=info msg="Executing migration" id="create team member table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.144553218Z level=info msg="Migration successfully executed" id="create team member table" duration=790.348µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.147416525Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.148393002Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=974.997µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.151552514Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.15251326Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=959.796µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.157308571Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.158606613Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.296562ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.162543062Z level=info msg="Executing migration" id="Add column email to team table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.170126066Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.583844ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.172738242Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.176086053Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.347071ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.181260981Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.186039951Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.77777ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.189893186Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.19081523Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=920.154µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.193739261Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.194572211Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=832.06µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.200015042Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.200964518Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=948.826µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.204148841Z level=info msg="Executing migration" id="add unique index 
dashboard_acl_dashboard_id_user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.205090866Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=941.265µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.208115641Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.209071927Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=955.666µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.21558456Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.217299883Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.715622ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.220658824Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.222149466Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.489601ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.225335579Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.226264883Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=928.114µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.229191724Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.231159188Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.965824ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.234980622Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.235826323Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=845.12µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.241407231Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.241732626Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=324.795µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.245256756Z level=info msg="Executing migration" id="create tag table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.246043053Z level=info msg="Migration successfully executed" id="create tag table" duration=786.017µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.251818771Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.252790228Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=971.446µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.256306326Z level=info msg="Executing migration" id="create login attempt table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.257690683Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.383877ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.261737657Z level=info msg="Executing migration" id="add 
index login_attempt.username" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.263584806Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.839869ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.268129314Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.272975497Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=4.845213ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.276517787Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.291018494Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.499647ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.29427597Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.295012866Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=736.616µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.299155505Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.30009305Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=937.295µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.304024499Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.304431148Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=406.719µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.307810151Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.308597568Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=786.757µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.313185039Z level=info msg="Executing migration" id="create user auth table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.314472391Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.287612ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.319728623Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.32132106Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.591307ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.324608128Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.324629709Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=22.292µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.329300003Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.337906406Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.605073ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.341601694Z level=info 
msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.345324143Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.715798ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.349927284Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.355437039Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.509044ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.359506954Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.36504931Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.544546ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.368286446Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.369337516Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.04655ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.372761021Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.378101677Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.339786ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.38586692Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.394862363Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=8.992942ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.398494437Z level=info msg="Executing migration" id="create server_lock table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.399079455Z level=info msg="Migration successfully executed" id="create server_lock table" duration=584.178µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.4020905Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.402877138Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=784.047µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.407698199Z level=info msg="Executing migration" id="create user auth token table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.409286005Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.586336ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.413907067Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.414918516Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.010889ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.419115698Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.420184599Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.068511ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.424690215Z level=info 
msg="Executing migration" id="add index user_auth_token.user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.425732446Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.04204ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.429262205Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.437502941Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.237766ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.441417349Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.442164635Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=746.836µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.446715444Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.455474874Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.756971ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.459529199Z level=info msg="Executing migration" id="create cache_data table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.460188171Z level=info msg="Migration successfully executed" id="create cache_data table" duration=658.042µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.46412676Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.465652743Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.523673ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.47057518Z level=info msg="Executing migration" id="create short_url table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.471870452Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.294772ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.475766629Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.476802679Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.0354ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.480530138Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.480595601Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.601µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.486485514Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.486999329Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=512.574µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.493187926Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.494343112Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.154945ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.497982036Z level=info 
msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.499533331Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.545084ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.503541243Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.50555608Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.013597ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.510301068Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.510326139Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=25.741µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.512908893Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.513910902Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.001258ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.517724105Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.518646189Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=921.904µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.524027207Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.525624514Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.596547ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.529347023Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.530884167Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.536334ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.535238636Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.541282946Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.04193ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.544949163Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.545864437Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=913.754µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.549395966Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.54948038Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=84.074µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.553907893Z 
level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.555413435Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.504752ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.559151745Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.560791364Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.635538ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.564370736Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.56550155Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.130265ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.569912012Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.569932533Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=21.401µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.574521823Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.575927571Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.400567ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.579543014Z level=info msg="Executing migration" id="create alert_instance table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.580520171Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=976.567µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.584900242Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.585868938Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=967.816µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.589141456Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.590104732Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=965.097µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.595615437Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.60525767Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.640674ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.609087984Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.609743345Z level=info 
msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=655.181µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.612896757Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.61359032Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=691.753µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.618203122Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.647249717Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.046286ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.650823609Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.677649877Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.824678ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.680991588Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.682000386Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.009608ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.686766515Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.687717491Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=950.876µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.692129963Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.698092749Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.962046ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.70144147Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.707501281Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.03631ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.712970684Z level=info msg="Executing migration" id="create alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.714060917Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.088862ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.718955282Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.720664824Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.709272ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.724412414Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.726158098Z level=info msg="Migration 
successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.744894ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.730413172Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.73141092Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=996.898µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.734991912Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.735012773Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=21.211µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.738822026Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.747998717Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.176581ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.752931644Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.761951377Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.019973ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.765875806Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.772701774Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.824827ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.776130178Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.777175839Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.025839ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.781535468Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.782690153Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.155425ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.786143559Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.797222682Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=11.076382ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.800970342Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.805529191Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.557808ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.81052309Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.811571801Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, 
dashboard_uid and panel_id columns" duration=1.047371ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.815139432Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.82528383Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.141927ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.829302123Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.833715915Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.412822ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.838777698Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.838799689Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=23.061µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.842902486Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.844171007Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.271211ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.849469551Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.850583685Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.114204ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.854886042Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.856602974Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.715882ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.860751813Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.860788215Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=37.762µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.865801466Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.873018233Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.215237ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.876977523Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.883607991Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.626408ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.887522829Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.894369148Z level=info msg="Migration successfully executed" 
id="add column labels to alert_rule_version" duration=6.846459ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.902508669Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.913517388Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=11.009729ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.916624557Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.921343854Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.718857ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.926944003Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.926964384Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=20.721µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.932780964Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.934246384Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.46445ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.939323518Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.95103267Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.702022ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.954991231Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.955111406Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=121.026µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.958792373Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.965561098Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.781346ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.97038109Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.971475522Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.093062ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.974888046Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.985739958Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.852932ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.989091559Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.989804153Z level=info msg="Migration successfully executed" 
id=create_ngalert_configuration_table duration=712.054µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.99453254Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.995733138Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.200608ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:51.999408284Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.0059727Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.563586ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.01014391Z level=info msg="Executing migration" id="create provenance_type table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.011071285Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=926.905µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.016851462Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.018327063Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.475861ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.021475244Z level=info msg="Executing migration" id="create alert_image table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.022408669Z level=info msg="Migration successfully executed" id="create alert_image table" duration=933.085µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.025702848Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.026756408Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.053121ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.031657654Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.031711016Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=53.623µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.034984813Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.036056145Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.071242ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.040364482Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.041559059Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.193667ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.046593151Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.047052463Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:52.050123651Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.050620875Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=496.973µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.05302873Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.054081411Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.052281ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.058482362Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.065451307Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.968585ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.069129354Z level=info msg="Executing migration" id="create library_element table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.070228226Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.098762ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.075674758Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.077598951Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.925902ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.081455336Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.082941047Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.483961ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.086800553Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.087982229Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.181387ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.092617232Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.094000108Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.381226ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.099214319Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.099348985Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=134.746µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.104121865Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.104173997Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=52.432µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.1075674Z level=info msg="Executing migration" 
id="add library_element folder uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.118311216Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.744226ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.123219782Z level=info msg="Executing migration" id="populate library_element folder_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.12379039Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=569.867µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.127260736Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.128511836Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.25069ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.132114189Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.13254514Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=430.431µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.136150743Z level=info msg="Executing migration" id="create data_keys table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.137300299Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.149225ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.141768743Z level=info msg="Executing migration" id="create secrets table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.1429422Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.171426ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.147897458Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.182336062Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.442865ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.186313123Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.191577466Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.259543ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.195884223Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.196073942Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=189.029µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.200782178Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.229191223Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.409035ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.233364803Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.26056062Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.194777ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.265618073Z level=info msg="Executing 
migration" id="create kv_store table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.266369069Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=750.416µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.269830035Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.271102546Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.270761ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.27596158Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.276420262Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=458.262µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.281634762Z level=info msg="Executing migration" id="create permission table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.282641981Z level=info msg="Migration successfully executed" id="create permission table" duration=1.006568ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.286288436Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.287388529Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.099923ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.291005852Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.292113226Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.106814ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.296962709Z level=info msg="Executing migration" id="create role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.297945216Z level=info msg="Migration successfully executed" id="create role table" duration=982.278µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.301646624Z level=info msg="Executing migration" id="add column display_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.30949171Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.843297ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.313402168Z level=info msg="Executing migration" id="add column group_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.320885148Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.48238ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.325782963Z level=info msg="Executing migration" id="add index role.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.326951339Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.168056ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.330761002Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.331916408Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.154686ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.335792734Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:52.337015873Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.218868ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.341980831Z level=info msg="Executing migration" id="create team role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.343013051Z level=info msg="Migration successfully executed" id="create team role table" duration=1.03194ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.349391237Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.350554753Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.162686ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.354795627Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.357038515Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.244938ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.362617613Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.363896304Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.278242ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.368003861Z level=info msg="Executing migration" id="create user role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.369052842Z level=info msg="Migration successfully executed" id="create user role table" duration=1.048231ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.373405341Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.374616749Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.210898ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.379690053Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.381759702Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.068079ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.3862863Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.388122808Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.835909ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.392677717Z level=info msg="Executing migration" id="create builtin role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.393673984Z level=info msg="Migration successfully executed" id="create builtin role table" duration=995.637µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.397815673Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.399763467Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.946144ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.405960525Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.407654986Z level=info msg="Migration successfully executed" id="add index 
builtin_role.name" duration=1.695481ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.411862308Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.417844176Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.981178ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.422768842Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.423926648Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.156986ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.429207952Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.430413439Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.204808ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.435494854Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.436681211Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.185897ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.440384808Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.442242448Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.856739ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.448423735Z level=info msg="Executing migration" id="create seed assignment table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.449484596Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.06068ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.45436583Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.455571438Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.202608ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.461205339Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.469403142Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.196793ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.474217774Z level=info msg="Executing migration" id="permission kind migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.482164475Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.945961ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.485689945Z level=info msg="Executing migration" id="permission attribute migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.491449272Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.758796ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.49642084Z level=info msg="Executing migration" id="permission identifier migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.504774982Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.357312ms 
23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.50952259Z level=info msg="Executing migration" id="add permission identifier index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.510693146Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.169686ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.514679387Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.515848934Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.168557ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.521463163Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.522573027Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.109594ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.527627049Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.535642165Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.014195ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.53970546Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.540899127Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.193217ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.54471691Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.545800292Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.083042ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.552467263Z level=info msg="Executing migration" id="create query_history table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.55345419Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=986.367µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.557165348Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.558300333Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.134025ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.562973137Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.563080213Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=96.435µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.56885628Z level=info msg="Executing migration" id="create query_history_details table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.570605724Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.749424ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.574174656Z level=info msg="Executing migration" 
id="rbac disabled migrator" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.574225668Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=50.873µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.579400457Z level=info msg="Executing migration" id="teams permissions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.579915921Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=515.134µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.583205839Z level=info msg="Executing migration" id="dashboard permissions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.584269691Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.064941ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.58799615Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.589147935Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.150826ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.592432313Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.592742368Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=309.694µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.597208222Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.59778639Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=577.288µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.601134931Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.602118148Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=979.237µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.606790382Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.607976609Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.185857ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.613003551Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.621496469Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.491888ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.626527991Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.626565082Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=38.382µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.630835528Z level=info msg="Executing migration" id="create correlation table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.632524379Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.686731ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.637235285Z level=info msg="Executing migration" id="add index 
correlations.uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.638947377Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.710912ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.644223311Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.646220177Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.995676ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.65044473Z level=info msg="Executing migration" id="add correlation config column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.659754127Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.308368ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.663643914Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.666020328Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.380205ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.670744945Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.672899428Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.154013ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.676621487Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.701511173Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.888696ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.706500772Z level=info msg="Executing migration" id="create correlation v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.707270059Z level=info msg="Migration successfully executed" id="create correlation v2" duration=768.967µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.710730036Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.711520794Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=790.158µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.716398158Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.718146222Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.747374ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.72539918Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.728061758Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.660748ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.732394686Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.732823657Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=429.141µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.73705602Z level=info msg="Executing migration" 
id="drop correlation_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.738377484Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.324853ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.743439407Z level=info msg="Executing migration" id="add provisioning column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.752497362Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.056965ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.756651952Z level=info msg="Executing migration" id="add type column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.764968071Z level=info msg="Migration successfully executed" id="add type column" duration=8.31599ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.76826871Z level=info msg="Executing migration" id="create entity_events table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.7689079Z level=info msg="Migration successfully executed" id="create entity_events table" duration=638.88µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.77409556Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.775104878Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.007869ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.779917779Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.78098217Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.786037133Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.786790269Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.790455956Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.791221652Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=764.997µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.800022125Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.801956128Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.933683ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.807068264Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.808438999Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.409267ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.814744672Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.815977922Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 
duration=1.232289ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.820565862Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.821684286Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.117534ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.826371451Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.827427612Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.05485ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.834937812Z level=info msg="Executing migration" id="Drop public config table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.836689107Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.750524ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.840803894Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.842570579Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.765615ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.847533707Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.848323805Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=789.298µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.851717078Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.852756258Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.03719ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.85882292Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.860806635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.973184ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.865619675Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.885813245Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.19303ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.893182569Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.901926909Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.7434ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.905743813Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.912369291Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.624098ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.91817271Z 
level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.918403561Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=230.231µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.921145243Z level=info msg="Executing migration" id="add share column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.929686243Z level=info msg="Migration successfully executed" id="add share column" duration=8.54054ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.933140209Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.933267795Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=126.336µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.937430875Z level=info msg="Executing migration" id="create file table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.938275196Z level=info msg="Migration successfully executed" id="create file table" duration=844.051µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.944426761Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.946293701Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.86628ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.950285843Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.951421347Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.134815ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.956117703Z level=info msg="Executing migration" id="create file_meta table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.957500779Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.381476ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.961137664Z level=info msg="Executing migration" id="file table idx: path key" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.962979912Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.842488ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.968618713Z level=info msg="Executing migration" id="set path collation in file table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.968639184Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.461µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.976013339Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.976033499Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.721µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.980654951Z level=info msg="Executing migration" id="managed permissions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.981207148Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=551.087µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.986144995Z level=info msg="Executing 
migration" id="managed folder permissions alert actions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.986514473Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=368.278µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.990295265Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.991979886Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.683971ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:52.99539729Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.00581263Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.41491ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.010060804Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.010218712Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.608µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.012990485Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.014112809Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.121924ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.018433086Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.018971032Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=540.346µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.02329301Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.023517721Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=224.38µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.02705165Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.027535044Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=482.883µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.031972187Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.041133387Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.16009ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.054898338Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.062477262Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.581464ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.067151307Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.067955915Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=803.898µs 
23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.072481833Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.142799281Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=70.316548ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.147443874Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.148338707Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=894.663µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.154070772Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.154877761Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=806.389µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.160465999Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.18317093Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.704121ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.18816356Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.195601637Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.432477ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.199448972Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.199668473Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=218.911µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.204697334Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.204856072Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=158.528µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.209232672Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.20939835Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=165.358µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.213890936Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.214139678Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=246.542µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.218636634Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.218788271Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=149.937µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.223253736Z level=info msg="Executing migration" id="create folder table" 23:16:23 grafana | 
logger=migrator t=2025-06-13T23:12:53.223881636Z level=info msg="Migration successfully executed" id="create folder table" duration=627.66µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.228219814Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.229014032Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=793.608µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.234716356Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.235575608Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=858.642µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.24540975Z level=info msg="Executing migration" id="Update folder title length" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.245428491Z level=info msg="Migration successfully executed" id="Update folder title length" duration=18.761µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.249938638Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.250739766Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=800.558µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.255832911Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.256626569Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=793.338µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.260227372Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.261063872Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=835.75µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.265855302Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.266155947Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=300.235µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.269455175Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.269637804Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=182.379µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.272907161Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.273865057Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=960.146µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.277428978Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.278238097Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=808.739µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:53.282562585Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.283308151Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=745.076µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.287387067Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.28828352Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=896.033µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.293239948Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.294000474Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=759.646µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.298334343Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.299266547Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=931.015µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.302526044Z level=info msg="Executing migration" id="create anon_device table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.303429837Z level=info msg="Migration successfully executed" id="create anon_device table" duration=903.143µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.308523842Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.309715909Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.191727ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.313022508Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.314165323Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.141635ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.319035457Z level=info msg="Executing migration" id="create signing_key table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.319879958Z level=info msg="Migration successfully executed" id="create signing_key table" duration=843.9µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.325331919Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.327683222Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.349743ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.332766747Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.334538592Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.771086ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.33825512Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.338535384Z level=info 
msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=280.514µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.348597797Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.360989392Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.401875ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.364753093Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.365367043Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=614.13µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.36925891Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.369346894Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=87.964µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.374559244Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.376606173Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.051108ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.383051902Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.383108685Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=56.903µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.386983071Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.388337966Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.353945ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.393014661Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.394259111Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.242719ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.400223947Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.402264375Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.038508ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.407714127Z level=info msg="Executing migration" id="create sso_setting table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.409663601Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.949994ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.414564616Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.415436698Z level=info msg="Migration successfully executed" 
id="copy kvstore migration status to each org" duration=869.182µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.42192622Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.422282837Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=356.427µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.425941113Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.426655237Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=713.654µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.431608035Z level=info msg="Executing migration" id="create cloud_migration table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.432662235Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.05308ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.437697657Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.439448371Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.749684ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.446068069Z level=info msg="Executing migration" id="add stack_id column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.456089261Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.022192ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.461249209Z level=info msg="Executing migration" id="add region_slug column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.470589837Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.339358ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.475521354Z level=info msg="Executing migration" id="add cluster_slug column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.483264946Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.743012ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.487199245Z level=info msg="Executing migration" id="add migration uid column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.496600327Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.400532ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.500490594Z level=info msg="Executing migration" id="Update uid column values for migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.500977027Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=469.882µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.506067332Z level=info msg="Executing migration" id="Add unique index migration_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.508352392Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.285009ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.515058074Z level=info msg="Executing migration" id="add migration run uid column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.525246313Z level=info msg="Migration successfully executed" id="add migration run uid column" 
duration=10.183379ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.529852924Z level=info msg="Executing migration" id="Update uid column values for migration run" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.530123457Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=273.513µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.534782831Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.53601553Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.232429ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.540930587Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.566730726Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.80122ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.572061012Z level=info msg="Executing migration" id="create cloud_migration_session v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.572803028Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=740.886µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.578202767Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.580094408Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.890461ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.585389342Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.585729239Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=339.077µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.594600695Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.595912978Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.312273ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.600067807Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.626197283Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.127055ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.630207815Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.630869837Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=661.232µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.635387284Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.636610743Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.221309ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.641741089Z 
level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.642246144Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=504.235µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.648281444Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.650227677Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.939984ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.655420357Z level=info msg="Executing migration" id="add snapshot upload_url column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.671759351Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=16.337225ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.677253315Z level=info msg="Executing migration" id="add snapshot status column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.686037707Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=8.783802ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.689867301Z level=info msg="Executing migration" id="add snapshot local_directory column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.6998383Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.969809ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.708106478Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.715007039Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.898802ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.719678083Z level=info msg="Executing migration" id="add snapshot encryption_key column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.729282675Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.603942ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.734980399Z level=info msg="Executing migration" id="add snapshot error_string column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.745170818Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.18978ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.750077324Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.750887883Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=811.669µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.755011911Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.79829293Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=43.271448ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.803344103Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.812940554Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.596191ms 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:53.816567038Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.82411364Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.545742ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.831469534Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.844166424Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=12.69645ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.847587798Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.85449105Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=6.901692ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.860634685Z level=info msg="Executing migration" id="increase resource_uid column length" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.860654286Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=29.712µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.866426973Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.866457615Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=31.121µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.87010558Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.88176111Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.656ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.885272288Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.896862455Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.590577ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.902977679Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.903353157Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=375.158µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.906360481Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.906586652Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=226.041µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.908951326Z level=info msg="Executing migration" id="add record column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.918686554Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=9.735148ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.924246051Z level=info msg="Executing migration" id="add record 
column to alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.932840934Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.593722ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.93859887Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.948572639Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.972709ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.952385172Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.96003838Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.653308ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.96356906Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.964228581Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=659.041µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.973618402Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.983705727Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=10.090715ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:53.988468916Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.001411958Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=12.944682ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.005057804Z level=info msg="Executing migration" id="delete orphaned service account permissions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.005267574Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=209.19µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.009888926Z level=info msg="Executing migration" id="adding action set permissions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.01120932Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=1.361476ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.017301023Z level=info msg="Executing migration" id="create user_external_session table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.019202795Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.901712ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.026422753Z level=info msg="Executing migration" id="increase name_id column length to 1024" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.026454194Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=33.981µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.038683503Z level=info msg="Executing migration" id="increase session_id column length to 1024" 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:54.038712415Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=36.121µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.04276219Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.043340077Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=577.267µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.04857537Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.060556497Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.981027ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.06541239Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.075265075Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.850005ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.082298324Z level=info msg="Executing migration" id="add alert_rule_state table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.083273401Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=979.227µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.08970454Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.091633993Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.929113ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.097523907Z level=info msg="Executing migration" id="add guid column to alert_rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.10776537Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.240783ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.113104437Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.1225003Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.395493ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.126270941Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.126294602Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.126511273Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.126530444Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=259.383µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.131124395Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.131760956Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=635.74µs 23:16:23 grafana | logger=migrator 
t=2025-06-13T23:12:54.136528185Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.137713732Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.185137ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.141502145Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.142787537Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.284762ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.146778769Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.148328223Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.548044ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.154612316Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.156677856Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.065539ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.160841156Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.170665319Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.826293ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.174507714Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.184790029Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=10.281055ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.188276897Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.198067259Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.789442ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.20265103Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.212743606Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.093836ms 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.216084327Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.21636016Z level=info msg="Removed 0 datasources:drilldown permissions" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.21637388Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=289.504µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.219675959Z level=info msg="Executing 
migration" id="remove title in folder unique index" 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.220579863Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=903.614µs 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.224796626Z level=info msg="migrations completed" performed=654 skipped=0 duration=4.682351833s 23:16:23 grafana | logger=migrator t=2025-06-13T23:12:54.225551792Z level=info msg="Unlocking database" 23:16:23 grafana | logger=sqlstore t=2025-06-13T23:12:54.243267376Z level=info msg="Created default admin" user=admin 23:16:23 grafana | logger=sqlstore t=2025-06-13T23:12:54.243604112Z level=info msg="Created default organization" 23:16:23 grafana | logger=secrets t=2025-06-13T23:12:54.250184549Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:23 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T23:12:54.345852986Z level=info msg="Restored cache from database" duration=541.256µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.354569946Z level=info msg="Locking database" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.354587217Z level=info msg="Starting DB migrations" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.362106899Z level=info msg="Executing migration" id="create resource_migration_log table" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.366355943Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=4.240024ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.370480902Z level=info msg="Executing migration" id="Initialize resource tables" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.370571876Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=91.564µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.374966748Z level=info msg="Executing migration" id="drop table resource" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.375322515Z level=info msg="Migration successfully executed" id="drop table resource" duration=355.107µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.379967839Z level=info msg="Executing migration" id="create table resource" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.381767966Z level=info msg="Migration successfully executed" id="create table resource" duration=1.794106ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.385523566Z level=info msg="Executing migration" id="create table resource, index: 0" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.387774125Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=2.250149ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.391521475Z level=info msg="Executing migration" id="drop table resource_history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.391804839Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=283.374µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.396945586Z level=info msg="Executing migration" id="create table resource_history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.398122323Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.175367ms 23:16:23 grafana 
| logger=resource-migrator t=2025-06-13T23:12:54.401716286Z level=info msg="Executing migration" id="create table resource_history, index: 0" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.403107733Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.390597ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.406443624Z level=info msg="Executing migration" id="create table resource_history, index: 1" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.407712105Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.270251ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.41196519Z level=info msg="Executing migration" id="drop table resource_version" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.412140198Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=173.038µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.41550627Z level=info msg="Executing migration" id="create table resource_version" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.416452666Z level=info msg="Migration successfully executed" id="create table resource_version" duration=945.046µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.42048939Z level=info msg="Executing migration" id="create table resource_version, index: 0" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.421818314Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.326754ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.4279559Z level=info msg="Executing migration" id="drop table resource_blob" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.428114477Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=158.427µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.433693606Z level=info msg="Executing migration" id="create table resource_blob" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.435382678Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.687881ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.43980424Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.441852799Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.046899ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.446781446Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.448014366Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.23265ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.45371422Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.467126046Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.411826ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.470714269Z level=info msg="Executing migration" id="Add column previous_resource_version in 
resource" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.479179717Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.463838ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.483443242Z level=info msg="Executing migration" id="Add index to resource_history for polling" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.484735074Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.291732ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.490140255Z level=info msg="Executing migration" id="Add index to resource for loading" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.492111059Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.969515ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.49565201Z level=info msg="Executing migration" id="Add column folder in resource_history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.508122761Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.47101ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.511421739Z level=info msg="Executing migration" id="Add column folder in resource" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.519048257Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=7.622707ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.524636726Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 23:16:23 grafana | logger=deletion-marker-migrator t=2025-06-13T23:12:54.524669977Z level=info msg="finding any deletion markers" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.525119809Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=490.694µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.528533804Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.530029846Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.494512ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.534390976Z level=info msg="Executing migration" id="Add generation to resource history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.547289517Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.898482ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.552736259Z level=info msg="Executing migration" id="Add generation index to resource history" 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.553658203Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=921.404µs 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.557556671Z level=info msg="migrations completed" performed=26 skipped=0 duration=195.497595ms 23:16:23 grafana | logger=resource-migrator t=2025-06-13T23:12:54.558739098Z level=info msg="Unlocking database" 23:16:23 grafana | t=2025-06-13T23:12:54.559138697Z level=info caller=logger.go:214 time=2025-06-13T23:12:54.559104886Z msg="Using channel notifier" 
logger=sql-resource-server 23:16:23 grafana | logger=plugin.store t=2025-06-13T23:12:54.5722706Z level=info msg="Loading plugins..." 23:16:23 grafana | logger=plugins.registration t=2025-06-13T23:12:54.615632438Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 23:16:23 grafana | logger=plugins.initialization t=2025-06-13T23:12:54.61566815Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 23:16:23 grafana | logger=plugin.store t=2025-06-13T23:12:54.615739173Z level=info msg="Plugins loaded" count=53 duration=43.469653ms 23:16:23 grafana | logger=query_data t=2025-06-13T23:12:54.621763433Z level=info msg="Query Service initialization" 23:16:23 grafana | logger=live.push_http t=2025-06-13T23:12:54.627290489Z level=info msg="Live Push Gateway initialization" 23:16:23 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T23:12:54.64266752Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 23:16:23 grafana | logger=ngalert t=2025-06-13T23:12:54.653120253Z level=info msg="Using simple database alert instance store" 23:16:23 grafana | logger=ngalert.state.manager.persist t=2025-06-13T23:12:54.653147075Z level=info msg="Using sync state persister" 23:16:23 grafana | logger=infra.usagestats.collector t=2025-06-13T23:12:54.657217731Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:23 grafana | logger=ngalert.state.manager t=2025-06-13T23:12:54.657729815Z level=info msg="Warming state cache for startup" 23:16:23 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T23:12:54.658061021Z level=info msg="Starting MultiOrg Alertmanager" 23:16:23 grafana | logger=grafanaStorageLogger t=2025-06-13T23:12:54.658741084Z level=info msg="Storage starting" 23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:54.66136437Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 23:16:23 grafana | logger=http.server t=2025-06-13T23:12:54.675756343Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:23 grafana | logger=plugins.update.checker t=2025-06-13T23:12:54.756328074Z level=info msg="Update check succeeded" duration=97.268735ms 23:16:23 grafana | logger=grafana.update.checker t=2025-06-13T23:12:54.760436672Z level=info msg="Update check succeeded" duration=101.853175ms 23:16:23 grafana | logger=ngalert.state.manager t=2025-06-13T23:12:54.775177451Z level=info msg="State cache has been initialized" states=0 duration=117.448576ms 23:16:23 grafana | logger=ngalert.scheduler t=2025-06-13T23:12:54.775244365Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 23:16:23 grafana | logger=ticker t=2025-06-13T23:12:54.775315708Z level=info msg=starting first_tick=2025-06-13T23:13:00Z 23:16:23 grafana | logger=provisioning.datasources t=2025-06-13T23:12:54.779646937Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:23 grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.79135007Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 23:16:23 grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.801352432Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 23:16:23 grafana | logger=sqlstore.transactions t=2025-06-13T23:12:54.802510668Z level=info msg="Database locked, sleeping then 
retrying" error="database is locked" retry=1 23:16:23 grafana | logger=provisioning.alerting t=2025-06-13T23:12:54.826393038Z level=info msg="starting to provision alerting" 23:16:23 grafana | logger=provisioning.alerting t=2025-06-13T23:12:54.82643192Z level=info msg="finished to provision alerting" 23:16:23 grafana | logger=provisioning.dashboard t=2025-06-13T23:12:54.853931514Z level=info msg="starting to provision dashboards" 23:16:23 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T23:12:54.928163879Z level=info msg="Patterns update finished" duration=135.153029ms 23:16:23 grafana | logger=plugin.installer t=2025-06-13T23:12:55.014542249Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 23:16:23 grafana | logger=installer.fs t=2025-06-13T23:12:55.072742732Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 23:16:23 grafana | logger=plugins.registration t=2025-06-13T23:12:55.107339108Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.107371849Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=445.984188ms 23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.107396151Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.297882154Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.298612309Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.29925715Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.304618188Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.305818436Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.306435906Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.307272616Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=plugin.installer t=2025-06-13T23:12:55.307504257Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.308356268Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 23:16:23 grafana | logger=grafana-apiserver t=2025-06-13T23:12:55.310374226Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 23:16:23 grafana | logger=app-registry t=2025-06-13T23:12:55.370875729Z level=info msg="app registry initialized" 23:16:23 grafana | logger=installer.fs t=2025-06-13T23:12:55.385331215Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 23:16:23 grafana | logger=plugins.registration t=2025-06-13T23:12:55.402492412Z level=info msg="Plugin registered" 
pluginId=grafana-exploretraces-app
23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.402517363Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=295.114072ms
23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.402542174Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
23:16:23 grafana | logger=plugin.installer t=2025-06-13T23:12:55.585634722Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
23:16:23 grafana | logger=installer.fs t=2025-06-13T23:12:55.65932554Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app"
23:16:23 grafana | logger=plugins.registration t=2025-06-13T23:12:55.679960204Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app
23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.679983575Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=277.436051ms
23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:55.680008847Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
23:16:23 grafana | logger=provisioning.dashboard t=2025-06-13T23:12:55.866948979Z level=info msg="finished to provision dashboards"
23:16:23 grafana | logger=plugin.installer t=2025-06-13T23:12:55.963248687Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
23:16:23 grafana | logger=installer.fs t=2025-06-13T23:12:56.08976819Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app"
23:16:23 grafana | logger=plugins.registration t=2025-06-13T23:12:56.113640999Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app
23:16:23 grafana | logger=plugin.backgroundinstaller t=2025-06-13T23:12:56.113669881Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=433.655654ms
23:16:23 grafana | logger=infra.usagestats t=2025-06-13T23:14:46.667471173Z level=info msg="Usage stats are ready to report"
23:16:23 kafka | ===> User
23:16:23 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:23 kafka | ===> Configuring ...
23:16:23 kafka | Running in Zookeeper mode...
23:16:23 kafka | ===> Running preflight checks ...
23:16:23 kafka | ===> Check if /var/lib/kafka/data is writable ...
23:16:23 kafka | ===> Check if Zookeeper is healthy ...
23:16:23 kafka | [2025-06-13 23:12:48,107] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,108] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 
23:12:48,108] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,109] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,112] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,115] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:23 kafka | [2025-06-13 23:12:48,120] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:23 kafka | [2025-06-13 23:12:48,127] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:48,155] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:48,156] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:48,166] INFO Socket connection established, initiating session, client: /172.17.0.7:49484, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:48,199] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002663f0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:48,316] INFO Session: 0x1000002663f0000 closed (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:48,316] INFO EventThread shut down for session: 0x1000002663f0000 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:23 kafka | ===> Launching ... 23:16:23 kafka | ===> Launching kafka ... 23:16:23 kafka | [2025-06-13 23:12:49,022] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:23 kafka | [2025-06-13 23:12:49,336] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:23 kafka | [2025-06-13 23:12:49,436] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:23 kafka | [2025-06-13 23:12:49,437] INFO starting (kafka.server.KafkaServer) 23:16:23 kafka | [2025-06-13 23:12:49,438] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:23 kafka | [2025-06-13 23:12:49,451] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 23:16:23 kafka | [2025-06-13 23:12:49,455] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,455] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,455] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,455] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,456] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,458] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2025-06-13 23:12:49,462] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:23 kafka | [2025-06-13 23:12:49,468] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:49,469] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:23 kafka | [2025-06-13 23:12:49,478] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:49,486] INFO Socket connection established, initiating session, client: /172.17.0.7:49486, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:49,495] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002663f0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2025-06-13 23:12:49,500] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:23 kafka | [2025-06-13 23:12:49,796] INFO Cluster ID = 47INnyWnS9aLXhUugDQvzQ (kafka.server.KafkaServer) 23:16:23 kafka | [2025-06-13 23:12:49,800] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:23 kafka | [2025-06-13 23:12:49,849] INFO KafkaConfig values: 23:16:23 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:23 kafka | alter.config.policy.class.name = null 23:16:23 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:23 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:23 kafka | authorizer.class.name = 23:16:23 kafka | auto.create.topics.enable = true 23:16:23 kafka | auto.include.jmx.reporter = true 23:16:23 kafka | auto.leader.rebalance.enable = true 23:16:23 kafka | background.threads = 10 23:16:23 kafka | broker.heartbeat.interval.ms = 2000 23:16:23 kafka | broker.id = 1 23:16:23 kafka | broker.id.generation.enable = true 23:16:23 kafka | broker.rack = null 23:16:23 kafka | broker.session.timeout.ms = 9000 23:16:23 kafka | client.quota.callback.class = null 23:16:23 kafka | compression.type = producer 23:16:23 kafka | connection.failed.authentication.delay.ms = 100 23:16:23 kafka | connections.max.idle.ms = 600000 23:16:23 kafka | connections.max.reauth.ms = 0 23:16:23 kafka | control.plane.listener.name = null 23:16:23 kafka | controlled.shutdown.enable = true 23:16:23 kafka | controlled.shutdown.max.retries = 3 23:16:23 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:23 kafka | controller.listener.names = null 23:16:23 kafka | controller.quorum.append.linger.ms = 25 23:16:23 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:23 kafka | controller.quorum.election.timeout.ms = 1000 23:16:23 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:23 kafka | controller.quorum.request.timeout.ms = 2000 23:16:23 kafka | controller.quorum.retry.backoff.ms = 20 23:16:23 kafka | controller.quorum.voters = [] 23:16:23 kafka | controller.quota.window.num = 11 23:16:23 kafka | controller.quota.window.size.seconds = 1 23:16:23 kafka | controller.socket.timeout.ms = 30000 23:16:23 kafka | create.topic.policy.class.name = null 23:16:23 kafka | default.replication.factor = 1 23:16:23 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:23 kafka | delegation.token.expiry.time.ms = 86400000 23:16:23 kafka | delegation.token.master.key = null 23:16:23 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:23 kafka | delegation.token.secret.key = null 23:16:23 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:23 kafka | delete.topic.enable = true 23:16:23 kafka | early.start.listeners = null 23:16:23 kafka | fetch.max.bytes = 57671680 23:16:23 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:23 kafka | group.initial.rebalance.delay.ms = 3000 23:16:23 kafka | group.max.session.timeout.ms = 1800000 23:16:23 kafka | group.max.size = 2147483647 23:16:23 kafka | group.min.session.timeout.ms = 6000 23:16:23 kafka | initial.broker.registration.timeout.ms = 60000 23:16:23 kafka | inter.broker.listener.name = PLAINTEXT 23:16:23 kafka | inter.broker.protocol.version = 3.4-IV0 23:16:23 kafka | kafka.metrics.polling.interval.secs = 10 23:16:23 kafka | kafka.metrics.reporters = [] 23:16:23 kafka | leader.imbalance.check.interval.seconds = 300 23:16:23 kafka | leader.imbalance.per.broker.percentage = 10 23:16:23 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:23 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:23 kafka | log.cleaner.backoff.ms = 15000 23:16:23 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:23 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:23 kafka | log.cleaner.enable = true 23:16:23 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:23 kafka | log.cleaner.io.buffer.size = 524288 23:16:23 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:23 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:23 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:23 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:23 kafka | log.cleaner.threads = 1 23:16:23 kafka | log.cleanup.policy = [delete] 23:16:23 kafka | log.dir = /tmp/kafka-logs 23:16:23 kafka | log.dirs = /var/lib/kafka/data 23:16:23 kafka | log.flush.interval.messages = 9223372036854775807 23:16:23 kafka | log.flush.interval.ms = null 23:16:23 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:23 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:23 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:23 kafka | log.index.interval.bytes = 4096 23:16:23 kafka | log.index.size.max.bytes = 10485760 23:16:23 kafka | log.message.downconversion.enable = true 23:16:23 kafka | log.message.format.version = 3.0-IV1 23:16:23 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:23 kafka | log.message.timestamp.type = CreateTime 23:16:23 kafka | log.preallocate = false 23:16:23 kafka | log.retention.bytes = -1 23:16:23 kafka | log.retention.check.interval.ms = 300000 23:16:23 kafka | log.retention.hours = 168 23:16:23 kafka | log.retention.minutes = null 23:16:23 kafka | log.retention.ms = null 23:16:23 kafka | log.roll.hours = 168 23:16:23 kafka | log.roll.jitter.hours = 0 23:16:23 kafka | log.roll.jitter.ms = null 23:16:23 kafka | log.roll.ms = null 23:16:23 kafka | log.segment.bytes = 1073741824 23:16:23 kafka | log.segment.delete.delay.ms = 60000 23:16:23 kafka | max.connection.creation.rate = 2147483647 23:16:23 kafka | max.connections = 2147483647 23:16:23 kafka | max.connections.per.ip = 2147483647 23:16:23 kafka | max.connections.per.ip.overrides = 23:16:23 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:23 kafka | message.max.bytes = 1048588 23:16:23 kafka | metadata.log.dir = null 23:16:23 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:23 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:23 kafka | metadata.log.segment.bytes = 1073741824 23:16:23 kafka | metadata.log.segment.min.bytes = 8388608 23:16:23 kafka | metadata.log.segment.ms = 604800000 23:16:23 kafka | metadata.max.idle.interval.ms = 500 23:16:23 kafka | metadata.max.retention.bytes = 104857600 23:16:23 kafka | metadata.max.retention.ms = 604800000 23:16:23 kafka | metric.reporters = [] 23:16:23 kafka | metrics.num.samples = 2 23:16:23 kafka | metrics.recording.level = INFO 23:16:23 kafka | metrics.sample.window.ms = 30000 23:16:23 kafka | min.insync.replicas = 1 23:16:23 kafka | node.id = 1 23:16:23 kafka | num.io.threads = 8 23:16:23 kafka | num.network.threads = 3 23:16:23 kafka | num.partitions = 1 23:16:23 kafka | num.recovery.threads.per.data.dir = 1 23:16:23 kafka | num.replica.alter.log.dirs.threads = null 23:16:23 kafka | num.replica.fetchers = 1 23:16:23 kafka | offset.metadata.max.bytes = 4096 23:16:23 kafka | offsets.commit.required.acks = -1 
23:16:23 kafka | offsets.commit.timeout.ms = 5000 23:16:23 kafka | offsets.load.buffer.size = 5242880 23:16:23 kafka | offsets.retention.check.interval.ms = 600000 23:16:23 kafka | offsets.retention.minutes = 10080 23:16:23 kafka | offsets.topic.compression.codec = 0 23:16:23 kafka | offsets.topic.num.partitions = 50 23:16:23 kafka | offsets.topic.replication.factor = 1 23:16:23 kafka | offsets.topic.segment.bytes = 104857600 23:16:23 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:23 kafka | password.encoder.iterations = 4096 23:16:23 kafka | password.encoder.key.length = 128 23:16:23 kafka | password.encoder.keyfactory.algorithm = null 23:16:23 kafka | password.encoder.old.secret = null 23:16:23 kafka | password.encoder.secret = null 23:16:23 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:23 kafka | process.roles = [] 23:16:23 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:23 kafka | producer.id.expiration.ms = 86400000 23:16:23 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:23 kafka | queued.max.request.bytes = -1 23:16:23 kafka | queued.max.requests = 500 23:16:23 kafka | quota.window.num = 11 23:16:23 kafka | quota.window.size.seconds = 1 23:16:23 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:23 kafka | remote.log.manager.task.interval.ms = 30000 23:16:23 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:23 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:23 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:23 kafka | remote.log.manager.thread.pool.size = 10 23:16:23 kafka | remote.log.metadata.manager.class.name = null 23:16:23 kafka | remote.log.metadata.manager.class.path = null 23:16:23 kafka | remote.log.metadata.manager.impl.prefix = null 23:16:23 kafka | remote.log.metadata.manager.listener.name = null 23:16:23 kafka | remote.log.reader.max.pending.tasks = 100 23:16:23 kafka | remote.log.reader.threads = 10 23:16:23 kafka | remote.log.storage.manager.class.name = null 23:16:23 kafka | remote.log.storage.manager.class.path = null 23:16:23 kafka | remote.log.storage.manager.impl.prefix = null 23:16:23 kafka | remote.log.storage.system.enable = false 23:16:23 kafka | replica.fetch.backoff.ms = 1000 23:16:23 kafka | replica.fetch.max.bytes = 1048576 23:16:23 kafka | replica.fetch.min.bytes = 1 23:16:23 kafka | replica.fetch.response.max.bytes = 10485760 23:16:23 kafka | replica.fetch.wait.max.ms = 500 23:16:23 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:23 kafka | replica.lag.time.max.ms = 30000 23:16:23 kafka | replica.selector.class = null 23:16:23 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:23 kafka | replica.socket.timeout.ms = 30000 23:16:23 kafka | replication.quota.window.num = 11 23:16:23 kafka | replication.quota.window.size.seconds = 1 23:16:23 kafka | request.timeout.ms = 30000 23:16:23 kafka | reserved.broker.max.id = 1000 23:16:23 kafka | sasl.client.callback.handler.class = null 23:16:23 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:23 kafka | sasl.jaas.config = null 23:16:23 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:23 kafka | sasl.kerberos.service.name = null 23:16:23 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 kafka | 
sasl.login.callback.handler.class = null 23:16:23 kafka | sasl.login.class = null 23:16:23 kafka | sasl.login.connect.timeout.ms = null 23:16:23 kafka | sasl.login.read.timeout.ms = null 23:16:23 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:23 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:23 kafka | sasl.login.refresh.window.factor = 0.8 23:16:23 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:23 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:23 kafka | sasl.login.retry.backoff.ms = 100 23:16:23 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:23 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:23 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 kafka | sasl.oauthbearer.expected.audience = null 23:16:23 kafka | sasl.oauthbearer.expected.issuer = null 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:23 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:23 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:23 kafka | sasl.server.callback.handler.class = null 23:16:23 kafka | sasl.server.max.receive.size = 524288 23:16:23 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:23 kafka | security.providers = null 23:16:23 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:23 kafka | socket.connection.setup.timeout.ms = 10000 23:16:23 kafka | socket.listen.backlog.size = 50 23:16:23 kafka | socket.receive.buffer.bytes = 102400 23:16:23 kafka | socket.request.max.bytes = 104857600 23:16:23 kafka | socket.send.buffer.bytes = 102400 23:16:23 kafka | ssl.cipher.suites = [] 23:16:23 kafka | ssl.client.auth = none 23:16:23 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 kafka | ssl.endpoint.identification.algorithm = https 23:16:23 kafka | ssl.engine.factory.class = null 23:16:23 kafka | ssl.key.password = null 23:16:23 kafka | ssl.keymanager.algorithm = SunX509 23:16:23 kafka | ssl.keystore.certificate.chain = null 23:16:23 kafka | ssl.keystore.key = null 23:16:23 kafka | ssl.keystore.location = null 23:16:23 kafka | ssl.keystore.password = null 23:16:23 kafka | ssl.keystore.type = JKS 23:16:23 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:23 kafka | ssl.protocol = TLSv1.3 23:16:23 kafka | ssl.provider = null 23:16:23 kafka | ssl.secure.random.implementation = null 23:16:23 kafka | ssl.trustmanager.algorithm = PKIX 23:16:23 kafka | ssl.truststore.certificates = null 23:16:23 kafka | ssl.truststore.location = null 23:16:23 kafka | ssl.truststore.password = null 23:16:23 kafka | ssl.truststore.type = JKS 23:16:23 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:23 kafka | transaction.max.timeout.ms = 900000 23:16:23 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:23 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:23 kafka | transaction.state.log.min.isr = 2 23:16:23 kafka | transaction.state.log.num.partitions = 50 23:16:23 kafka | transaction.state.log.replication.factor = 3 23:16:23 kafka | transaction.state.log.segment.bytes = 104857600 23:16:23 kafka | transactional.id.expiration.ms = 604800000 23:16:23 kafka | unclean.leader.election.enable = false 23:16:23 kafka | zookeeper.clientCnxnSocket = null 23:16:23 kafka | 
zookeeper.connect = zookeeper:2181 23:16:23 kafka | zookeeper.connection.timeout.ms = null 23:16:23 kafka | zookeeper.max.in.flight.requests = 10 23:16:23 kafka | zookeeper.metadata.migration.enable = false 23:16:23 kafka | zookeeper.session.timeout.ms = 18000 23:16:23 kafka | zookeeper.set.acl = false 23:16:23 kafka | zookeeper.ssl.cipher.suites = null 23:16:23 kafka | zookeeper.ssl.client.enable = false 23:16:23 kafka | zookeeper.ssl.crl.enable = false 23:16:23 kafka | zookeeper.ssl.enabled.protocols = null 23:16:23 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:23 kafka | zookeeper.ssl.keystore.location = null 23:16:23 kafka | zookeeper.ssl.keystore.password = null 23:16:23 kafka | zookeeper.ssl.keystore.type = null 23:16:23 kafka | zookeeper.ssl.ocsp.enable = false 23:16:23 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:23 kafka | zookeeper.ssl.truststore.location = null 23:16:23 kafka | zookeeper.ssl.truststore.password = null 23:16:23 kafka | zookeeper.ssl.truststore.type = null 23:16:23 kafka | (kafka.server.KafkaConfig) 23:16:23 kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2025-06-13 23:12:49,889] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2025-06-13 23:12:49,894] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2025-06-13 23:12:49,929] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:12:49,933] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:12:49,947] INFO Loaded 0 logs in 18ms. (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:12:49,947] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:12:49,949] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:12:49,959] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:23 kafka | [2025-06-13 23:12:50,010] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 23:16:23 kafka | [2025-06-13 23:12:50,034] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:23 kafka | [2025-06-13 23:12:50,052] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:23 kafka | [2025-06-13 23:12:50,099] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2025-06-13 23:12:50,464] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:23 kafka | [2025-06-13 23:12:50,468] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 23:16:23 kafka | [2025-06-13 23:12:50,490] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:23 kafka | [2025-06-13 23:12:50,491] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:23 kafka | [2025-06-13 23:12:50,491] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:23 kafka | [2025-06-13 23:12:50,495] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:23 kafka | [2025-06-13 23:12:50,500] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2025-06-13 23:12:50,524] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,525] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,527] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,528] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,542] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:23 kafka | [2025-06-13 23:12:50,568] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2025-06-13 23:12:50,592] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749856370583,1749856370583,1,0,0,72057604343267329,258,0,27 23:16:23 kafka | (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2025-06-13 23:12:50,593] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2025-06-13 23:12:50,653] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:23 kafka | [2025-06-13 23:12:50,662] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,675] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,676] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,688] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2025-06-13 23:12:50,690] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:12:50,700] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,701] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:12:50,706] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,717] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:23 kafka | [2025-06-13 23:12:50,734] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:23 kafka | [2025-06-13 23:12:50,740] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:23 kafka | [2025-06-13 23:12:50,740] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:23 kafka | [2025-06-13 23:12:50,756] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,756] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 23:16:23 kafka | [2025-06-13 23:12:50,762] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,765] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,767] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,785] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,788] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2025-06-13 23:12:50,793] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,799] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:23 kafka | [2025-06-13 23:12:50,817] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:23 kafka | [2025-06-13 23:12:50,819] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,820] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,825] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,826] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:23 kafka | [2025-06-13 
23:12:50,827] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,829] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:23 kafka | [2025-06-13 23:12:50,833] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:12:50,842] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:23 kafka | [2025-06-13 23:12:50,843] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,844] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,848] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,850] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,850] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,851] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,853] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2025-06-13 23:12:50,854] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,862] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2025-06-13 23:12:50,863] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2025-06-13 23:12:50,863] INFO Kafka startTimeMs: 1749856370852 (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2025-06-13 23:12:50,864] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:23 kafka | [2025-06-13 23:12:50,866] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:23 kafka | [2025-06-13 23:12:50,870] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,870] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,872] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,875] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,876] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,910] INFO [Controller id=1] Starting the 
controller scheduler (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:50,959] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:12:51,012] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2025-06-13 23:12:51,020] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2025-06-13 23:12:55,928] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:12:55,928] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:13:19,898] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:13:19,913] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:13:19,918] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:23 kafka | [2025-06-13 23:13:19,920] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:23 kafka | [2025-06-13 23:13:19,984] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(XtISqPPyT6u0K2_KLzekQA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(g38L8b5dTFikXSBS1QHgiA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:13:19,986] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:23 kafka | [2025-06-13 23:13:19,989] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,990] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,991] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,992] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,992] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,993] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,994] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,995] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,996] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,997] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,998] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,999] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:19,999] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,000] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,001] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,004] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to 
NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from 
NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,016] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,175] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
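The controller entries above record the creation of the policy-pdp-pap and __consumer_offsets topics and the transition of their partitions to OnlinePartition with leader=1 and isr=List(1). As a minimal, illustrative sketch only (it is not part of the CSIT job), the same single-partition, single-replica policy-pdp-pap topic could be created and then verified from a client with the Kafka AdminClient; the kafka:9092 bootstrap address is the listener reported in the log, while the class name and the assumption of kafka-clients 3.x on the classpath are hypothetical:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class PolicyPdpPapTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Listener the controller reports in the log above (assumption: reachable from this client).
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // Client-side equivalent of the AdminZkClient entry: 1 partition, replication factor 1, default configs.
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();

            // Confirm what state.change.logger reports: partition 0 online with leader=1, isr=[1].
            TopicDescription desc = admin.describeTopics(Collections.singleton("policy-pdp-pap"))
                    .allTopicNames().get().get("policy-pdp-pap");
            for (TopicPartitionInfo p : desc.partitions()) {
                System.out.printf("partition=%d leader=%s isr=%s%n", p.partition(), p.leader(), p.isr());
            }
        }
    }
}

The log does not show which component requested the topic creation; against a broker where policy-pdp-pap already exists, the createTopics call would fail with a TopicExistsException, so for checking the leader/ISR state shown above the describeTopics portion alone is sufficient.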
23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,176] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,177] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,178] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,179] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,180] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | 
[2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,181] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,182] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,182] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,185] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,185] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:23 kafka | 
[2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,186] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,187] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-48 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,188] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,189] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,191] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,201] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,203] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,204] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,209] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,210] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,211] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,255] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-4 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting 
the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,256] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,258] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, 
__consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:23 kafka | [2025-06-13 23:13:20,258] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,337] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,353] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,362] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,363] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,365] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,388] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,389] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,389] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,389] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,391] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,399] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,399] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,399] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,399] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,399] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,410] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,411] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,411] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,411] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,412] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,420] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,421] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,421] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,421] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,421] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,433] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,434] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,434] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,434] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,434] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,443] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,445] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,445] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,445] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,446] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,458] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,460] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,460] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,460] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,461] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,471] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,473] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,473] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,473] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,473] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,489] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,489] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,490] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,493] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,493] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,500] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,501] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,501] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,501] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,502] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,511] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,515] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,515] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,515] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,515] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,524] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,525] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,525] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,525] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,525] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,534] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,535] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,535] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,535] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,535] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,547] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,548] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,548] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,548] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,548] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,559] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,560] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,560] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,560] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,560] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,567] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,567] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,567] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,567] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,567] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,579] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,580] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,580] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,580] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,581] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,592] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,593] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,593] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,593] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,593] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,606] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,609] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,609] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,609] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,609] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,618] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,619] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,619] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,619] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,619] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,638] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,640] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,640] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,640] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,640] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,647] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,648] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,648] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,649] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,649] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,655] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,656] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,656] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,657] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,657] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,664] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,665] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,665] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,665] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,665] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,672] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,673] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,673] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,673] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,673] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,680] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,681] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,681] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,681] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,681] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,688] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,689] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,689] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,689] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,689] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,696] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,696] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,696] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,697] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,697] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,705] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,706] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,706] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,706] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,706] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,716] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,718] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,718] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,718] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,719] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,726] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,726] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,727] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,727] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,727] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,736] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,737] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,737] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,737] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,738] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,751] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,752] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,752] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,752] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,752] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,761] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,762] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,762] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,762] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,762] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(XtISqPPyT6u0K2_KLzekQA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
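At this point the policy-pdp-pap topic used by PAP and the PDPs has its single partition, policy-pdp-pap-0, online with an initial high watermark of 0. A hedged client-side check of that state (again assuming kafka-python; the bootstrap address is an assumption, not taken from the job configuration) might be:

    # Illustrative check only; the bootstrap address is assumed, not read from the compose files.
    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    print(consumer.partitions_for_topic("policy-pdp-pap"))              # expected: {0}
    print(consumer.end_offsets([TopicPartition("policy-pdp-pap", 0)]))  # expected offset 0 right after creation
    consumer.close()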
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,772] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,773] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,773] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,773] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,773] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,779] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,779] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,779] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,779] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,779] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,793] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,794] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,794] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,794] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,794] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,803] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,804] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,804] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,804] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,804] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,815] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,816] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,817] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,817] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,817] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,827] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,827] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,827] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,828] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,828] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,840] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,841] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,841] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,842] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,842] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,851] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,852] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,852] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,852] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,852] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,863] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,864] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,864] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,864] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,864] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,876] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,876] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,877] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,877] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,877] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,888] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,889] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,891] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,891] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,891] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,900] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,901] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,901] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,901] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,901] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,907] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,908] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,908] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,908] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,908] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,916] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,917] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,917] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,917] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,917] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,923] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,924] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,924] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,924] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,924] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,937] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2025-06-13 23:13:20,938] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2025-06-13 23:13:20,938] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,938] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2025-06-13 23:13:20,938] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(g38L8b5dTFikXSBS1QHgiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:23 
kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-20 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,945] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,946] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,951] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,953] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,961] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,961] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,961] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,962] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,963] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,964] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,965] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,966] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:20,967] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,970] INFO [Broker id=1] Finished LeaderAndIsr request in 762ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,971] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,972] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,973] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,974] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2025-06-13 23:13:20,979] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=g38L8b5dTFikXSBS1QHgiA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=XtISqPPyT6u0K2_KLzekQA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,985] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) 
for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,986] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,987] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,988] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:20,988] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2025-06-13 23:13:21,640] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:21,655] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:21,757] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c in Empty state. Created a new member id consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:21,761] INFO [GroupCoordinator 1]: Preparing to rebalance group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c with group instance id None; client reason: need to re-join with the given member-id: consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:21,784] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a84e6b67-b24e-431d-be69-da7e7df84a86 in Empty state. Created a new member id consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:21,789] INFO [GroupCoordinator 1]: Preparing to rebalance group a84e6b67-b24e-431d-be69-da7e7df84a86 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 with group instance id None; client reason: need to re-join with the given member-id: consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,668] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,692] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,761] INFO [GroupCoordinator 1]: Stabilized group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,774] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c for group 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,791] INFO [GroupCoordinator 1]: Stabilized group a84e6b67-b24e-431d-be69-da7e7df84a86 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2025-06-13 23:13:24,796] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 for group a84e6b67-b24e-431d-be69-da7e7df84a86 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:23 policy-apex-pdp | Waiting for kafka port 9092... 23:16:23 policy-apex-pdp | kafka (172.17.0.7:9092) open 23:16:23 policy-apex-pdp | Waiting for pap port 6969... 
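[editor's note] The "Waiting for kafka port 9092... open" / "Waiting for pap port 6969..." entries above come from the container entrypoint, which blocks until each dependency accepts TCP connections before launching the PDP. The real probe is a shell wait script; the Java loop below is only a hedged sketch of the same readiness check, with the host/port pairs taken from the log.

// Minimal sketch of a TCP readiness probe, assuming the host names and ports seen in the
// log above (kafka:9092, pap:6969). Not the actual entrypoint script, just an illustration.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class WaitForPort {
    static void await(String host, int port) throws InterruptedException {
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2_000);
                System.out.println(host + ":" + port + " open");   // dependency is reachable
                return;
            } catch (IOException notYet) {
                Thread.sleep(2_000);                               // not listening yet, retry
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        await("kafka", 9092);   // values observed in the entrypoint log above
        await("pap", 6969);
    }
}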
23:16:23 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:23 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:23 policy-apex-pdp | [2025-06-13T23:13:20.744+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:23 policy-apex-pdp | [2025-06-13T23:13:20.926+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-apex-pdp | allow.auto.create.topics = true 23:16:23 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | auto.offset.reset = latest 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | check.crcs = true 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-1 23:16:23 policy-apex-pdp | client.rack = 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:23 policy-apex-pdp | enable.auto.commit = true 23:16:23 policy-apex-pdp | enable.metrics.push = true 23:16:23 policy-apex-pdp | exclude.internal.topics = true 23:16:23 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:23 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:23 policy-apex-pdp | fetch.min.bytes = 1 23:16:23 policy-apex-pdp | group.id = 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c 23:16:23 policy-apex-pdp | group.instance.id = null 23:16:23 policy-apex-pdp | group.protocol = classic 23:16:23 policy-apex-pdp | group.remote.assignor = null 23:16:23 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | internal.leave.group.on.close = true 23:16:23 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-apex-pdp | isolation.level = read_uncommitted 23:16:23 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:23 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:23 policy-apex-pdp | max.poll.records = 500 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 policy-apex-pdp | metadata.recovery.strategy = none 23:16:23 policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp | metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:23 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:23 
policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retry.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.header.urlencode = false 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | session.timeout.ms = 45000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | 
ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2025-06-13T23:13:20.979+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.147+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401145 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.150+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-1, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.171+00:00|INFO|ServiceManager|main] service manager starting 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.171+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.172+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.192+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-apex-pdp | allow.auto.create.topics = true 23:16:23 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | auto.offset.reset = latest 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | check.crcs = true 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2 23:16:23 policy-apex-pdp | client.rack = 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:23 policy-apex-pdp | enable.auto.commit = true 23:16:23 policy-apex-pdp | enable.metrics.push = true 23:16:23 policy-apex-pdp | exclude.internal.topics = true 23:16:23 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:23 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:23 policy-apex-pdp | fetch.min.bytes = 1 23:16:23 policy-apex-pdp | group.id = 0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c 23:16:23 policy-apex-pdp | group.instance.id = null 23:16:23 policy-apex-pdp | group.protocol = classic 23:16:23 policy-apex-pdp | group.remote.assignor = null 23:16:23 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | internal.leave.group.on.close = true 23:16:23 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-apex-pdp | isolation.level = read_uncommitted 23:16:23 
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:23 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:23 policy-apex-pdp | max.poll.records = 500 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 policy-apex-pdp | metadata.recovery.strategy = none 23:16:23 policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp | metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:23 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:23 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retry.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.header.urlencode = false 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | session.timeout.ms = 45000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.192+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401206 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.207+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.208+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44175417-ba30-47bc-8bc5-2a04b742873a, alive=false, publisher=null]]: starting 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.221+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:23 policy-apex-pdp | acks = -1 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | batch.size = 16384 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | buffer.memory = 33554432 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = producer-1 23:16:23 policy-apex-pdp | compression.gzip.level = -1 23:16:23 policy-apex-pdp | compression.lz4.level = 9 23:16:23 policy-apex-pdp | compression.type = none 23:16:23 policy-apex-pdp | compression.zstd.level = 3 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:23 policy-apex-pdp | enable.idempotence = true 23:16:23 policy-apex-pdp | enable.metrics.push = true 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-apex-pdp | linger.ms = 0 23:16:23 policy-apex-pdp | max.block.ms = 60000 23:16:23 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:23 policy-apex-pdp | max.request.size = 1048576 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:23 policy-apex-pdp | metadata.recovery.strategy = none 23:16:23 
policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp | metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:23 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:23 policy-apex-pdp | partitioner.class = null 23:16:23 policy-apex-pdp | partitioner.ignore.keys = false 23:16:23 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:23 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retries = 2147483647 23:16:23 policy-apex-pdp | retry.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.header.urlencode = false 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | 
ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:23 policy-apex-pdp | transactional.id = null 23:16:23 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.222+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.234+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.255+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856401255 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44175417-ba30-47bc-8bc5-2a04b742873a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.273+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.276+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.277+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c168660 23:16:23 policy-apex-pdp | 
[2025-06-13T23:13:21.280+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.280+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.294+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:23 policy-apex-pdp | [] 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.297+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4ff4360d-c2f6-4efb-9190-1bac5d7ac675","timestampMs":1749856401281,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.600+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|ServiceManager|main] service manager started 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|ServiceManager|main] service manager started 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.630+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
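[editor's note] The ConsumerConfig dumps above record the settings the apex-pdp source uses for the policy-pdp-pap topic: bootstrap.servers=[kafka:9092], group.id=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c, StringDeserializer for key and value, auto.offset.reset=latest. The sketch below wires those same settings into a plain kafka-clients consumer; the real component is ONAP's SingleThreadedKafkaTopicSource wrapper, so this is only an illustration of the underlying client configuration.

// Hedged sketch: a plain Kafka consumer configured with the values visible in the
// ConsumerConfig dump above and subscribed to the policy-pdp-pap topic from the log.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");   // as logged above

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));              // topic from the log
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.value());                          // raw PDP_* JSON payloads
            }
        }
    }
}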
23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.629+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@109f5dd8{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@415e0bcb{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@194152cf{STOPPED}}, connector=RestServerParameters@49d98dc5{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-4e7095ac==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@359c841e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64d7b720==org.glassfish.jersey.servlet.ServletContainer@2f1718b2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.726+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.726+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.727+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.738+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] (Re-)joining group 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Request joining group due to: need to re-join with the given member-id: consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c 23:16:23 policy-apex-pdp | [2025-06-13T23:13:21.759+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] (Re-)joining group 23:16:23 policy-apex-pdp | [2025-06-13T23:13:22.183+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:23 policy-apex-pdp | [2025-06-13T23:13:22.184+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:23 
policy-apex-pdp | [2025-06-13T23:13:24.763+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c', protocol='range'} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.771+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Finished assignment for group at generation 1: {consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c=Assignment(partitions=[policy-pdp-pap-0])} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2-f03b8477-821b-4880-9983-f01f04d4017c', protocol='range'} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.780+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Adding newly assigned partitions: policy-pdp-pap-0 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.793+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Found no committed offset for partition policy-pdp-pap-0 23:16:23 policy-apex-pdp | [2025-06-13T23:13:24.814+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c-2, groupId=0cf9e9ec-8cd4-4afc-9213-9d30c6adba6c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
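[editor's note] On the outbound side, the ProducerConfig dump above (acks=-1, enable.idempotence=true, StringSerializer, bootstrap kafka:9092) and the [OUT|KAFKA|policy-pdp-pap] heartbeat entries show what the apex-pdp sink publishes back to policy-pdp-pap. The sketch below reproduces that publish with the plain producer client; the JSON field names are taken from the logged PDP_STATUS payloads, while the requestId and name values are placeholders, not the real identifiers.

// Hedged sketch: publishing a PDP_STATUS heartbeat like the ones in the log, using the
// producer settings recorded in the ProducerConfig dump above. Identifiers are placeholders.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class HeartbeatPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");   // implies acks=all, as logged
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                + "\"requestId\":\"00000000-0000-0000-0000-000000000000\","   // placeholder id
                + "\"timestampMs\":" + System.currentTimeMillis() + ","
                + "\"name\":\"apex-example\",\"pdpGroup\":\"defaultGroup\"}";  // placeholder name

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap", heartbeat));
            producer.flush();   // make sure the heartbeat is delivered before exiting
        }
    }
}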
23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.280+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.308+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.311+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.464+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.465+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.484+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.485+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.491+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.491+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.532+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.534+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.542+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.542+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.569+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:13:41.578+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:13:49.214+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.1 - - [13/Jun/2025:23:13:49 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" 23:16:23 policy-apex-pdp | [2025-06-13T23:13:56.127+00:00|INFO|RequestLog|qtp1089680530-27] 172.17.0.3 - policyadmin [13/Jun/2025:23:13:56 +0000] "GET /metrics HTTP/1.1" 200 2052 "" "Prometheus/3.4.1" 23:16:23 policy-apex-pdp | [2025-06-13T23:14:09.253+00:00|INFO|RequestLog|qtp1089680530-29] 172.17.0.1 - policyadmin [13/Jun/2025:23:14:09 +0000] "GET /policy/apex-pdp/v1/healthcheck HTTP/1.1" 200 109 "" "curl/7.58.0" 23:16:23 policy-apex-pdp | [2025-06-13T23:14:56.076+00:00|INFO|RequestLog|qtp1089680530-28] 172.17.0.3 - policyadmin [13/Jun/2025:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 2065 "" "Prometheus/3.4.1" 23:16:23 policy-apex-pdp | [2025-06-13T23:15:41.465+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | 
[2025-06-13T23:15:41.478+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2025-06-13T23:15:41.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2025-06-13T23:15:56.079+00:00|INFO|RequestLog|qtp1089680530-33] 172.17.0.3 - policyadmin [13/Jun/2025:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 2075 "" "Prometheus/3.4.1" 23:16:23 policy-api | Waiting for policy-db-migrator port 6824... 23:16:23 policy-api | policy-db-migrator (172.17.0.6:6824) open 23:16:23 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:23 policy-api | 23:16:23 policy-api | . ____ _ __ _ _ 23:16:23 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:23 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:23 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:23 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:23 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:23 policy-api | 23:16:23 policy-api | :: Spring Boot :: (v3.4.6) 23:16:23 policy-api | 23:16:23 policy-api | [2025-06-13T23:12:58.284+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 23:16:23 policy-api | [2025-06-13T23:12:58.359+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 30 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:23 policy-api | [2025-06-13T23:12:58.360+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 23:16:23 policy-api | [2025-06-13T23:12:59.776+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:23 policy-api | [2025-06-13T23:12:59.933+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 146 ms. Found 6 JPA repository interfaces. 
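[editor's note] The apex-pdp request-log entries above show the REST server on port 6969 answering "GET /policy/apex-pdp/v1/healthcheck" with 200 when called with basic auth, and 401 for the unauthenticated probe; the Jetty parameters logged earlier name the user policyadmin with password zb!XztG34, and the CSIT variables below resolve the service as policy-apex-pdp:6969. The sketch below replays that healthcheck call; host name and credentials are taken from the log and would of course not be hard-coded in real use.

// Hedged sketch: the healthcheck call recorded in the apex-pdp request log, using values
// (host, port, credentials) that appear elsewhere in this log.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public final class ApexHealthCheck {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("policyadmin:zb!XztG34".getBytes());   // credentials from the log
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://policy-apex-pdp:6969/policy/apex-pdp/v1/healthcheck"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());   // expect 200, as logged
    }
}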
23:16:23 policy-api | [2025-06-13T23:13:00.567+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 23:16:23 policy-api | [2025-06-13T23:13:00.580+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:23 policy-api | [2025-06-13T23:13:00.581+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:23 policy-api | [2025-06-13T23:13:00.581+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 23:16:23 policy-api | [2025-06-13T23:13:00.624+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:23 policy-api | [2025-06-13T23:13:00.624+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2207 ms 23:16:23 policy-api | [2025-06-13T23:13:00.913+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:23 policy-api | [2025-06-13T23:13:00.990+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 23:16:23 policy-api | [2025-06-13T23:13:01.035+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:23 policy-api | [2025-06-13T23:13:01.396+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:23 policy-api | [2025-06-13T23:13:01.437+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:23 policy-api | [2025-06-13T23:13:01.630+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1ab21633 23:16:23 policy-api | [2025-06-13T23:13:01.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:23 policy-api | [2025-06-13T23:13:01.705+00:00|INFO|pooling|main] HHH10001005: Database info: 23:16:23 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 23:16:23 policy-api | Database driver: undefined/unknown 23:16:23 policy-api | Database version: 16.4 23:16:23 policy-api | Autocommit mode: undefined/unknown 23:16:23 policy-api | Isolation level: undefined/unknown 23:16:23 policy-api | Minimum pool size: undefined/unknown 23:16:23 policy-api | Maximum pool size: undefined/unknown 23:16:23 policy-api | [2025-06-13T23:13:03.689+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:23 policy-api | [2025-06-13T23:13:03.693+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:23 policy-api | [2025-06-13T23:13:04.353+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:23 policy-api | [2025-06-13T23:13:05.305+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:23 policy-api | [2025-06-13T23:13:06.394+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:23 policy-api | [2025-06-13T23:13:06.438+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 23:16:23 policy-api | [2025-06-13T23:13:07.106+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 23:16:23 policy-api | [2025-06-13T23:13:07.234+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:23 policy-api | [2025-06-13T23:13:07.266+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 23:16:23 policy-api | [2025-06-13T23:13:07.290+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.678 seconds (process running for 10.24) 23:16:23 policy-api | [2025-06-13T23:13:39.924+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:23 policy-api | [2025-06-13T23:13:39.925+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:23 policy-api | [2025-06-13T23:13:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 23:16:23 policy-api | [2025-06-13T23:14:54.692+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers: 23:16:23 policy-api | [] 23:16:23 policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot 23:16:23 policy-csit | Run Robot test 23:16:23 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 23:16:23 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 23:16:23 policy-csit | -v POLICY_API_IP:policy-api:6969 23:16:23 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 23:16:23 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 23:16:23 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 23:16:23 policy-csit | -v APEX_IP:policy-apex-pdp:6969 23:16:23 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 23:16:23 policy-csit | -v KAFKA_IP:kafka:9092 23:16:23 policy-csit | -v PROMETHEUS_IP:prometheus:9090 23:16:23 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 23:16:23 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 23:16:23 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 23:16:23 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 23:16:23 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 23:16:23 policy-csit | -v TEMP_FOLDER:/tmp/distribution 23:16:23 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 23:16:23 policy-csit | -v TEST_ENV:docker 23:16:23 policy-csit | -v JAEGER_IP:jaeger:16686 23:16:23 policy-csit | Starting Robot test suites ... 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | Pap-Test & Pap-Slas 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | Pap-Test & Pap-Slas.Pap-Test 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... 
| PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | Healthcheck :: Verify policy pap health check | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... 
| PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 23:16:23 policy-csit | 22 tests, 22 passed, 0 failed 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | Pap-Test & Pap-Slas.Pap-Slas 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | 23:16:23 policy-csit | ------------------------------------------------------------------------------ 23:16:23 policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 23:16:23 policy-csit | 8 tests, 8 passed, 0 failed 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | Pap-Test & Pap-Slas | PASS | 23:16:23 policy-csit | 30 tests, 30 passed, 0 failed 23:16:23 policy-csit | ============================================================================== 23:16:23 policy-csit | Output: /tmp/results/output.xml 23:16:23 policy-csit | Log: /tmp/results/log.html 23:16:23 policy-csit | Report: /tmp/results/report.html 23:16:23 policy-csit | RESULT: 0 23:16:24 policy-db-migrator | Waiting for postgres port 5432... 23:16:24 policy-db-migrator | nc: connect to postgres (172.17.0.5) port 5432 (tcp) failed: Connection refused 23:16:24 policy-db-migrator | Connection to postgres (172.17.0.5) 5432 port [tcp/postgresql] succeeded! 23:16:24 policy-db-migrator | Initializing policyadmin... 23:16:24 policy-db-migrator | 321 blocks 23:16:24 policy-db-migrator | Preparing upgrade release version: 0800 23:16:24 policy-db-migrator | Preparing upgrade release version: 0900 23:16:24 policy-db-migrator | Preparing upgrade release version: 1000 23:16:24 policy-db-migrator | Preparing upgrade release version: 1100 23:16:24 policy-db-migrator | Preparing upgrade release version: 1200 23:16:24 policy-db-migrator | Preparing upgrade release version: 1300 23:16:24 policy-db-migrator | Done 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 
policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | -------------+--------- 23:16:24 policy-db-migrator | policyadmin | 0 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 23:16:24 policy-db-migrator | (0 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | upgrade: 0 -> 1300 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 
policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 
23:16:24 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator 
| INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | 
rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:24 policy-db-migrator | 
CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:24 
policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 
23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:24 policy-db-migrator | msg 23:16:24 policy-db-migrator | --------------------------- 23:16:24 policy-db-migrator | upgrade to 1100 completed 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:24 policy-db-migrator | DROP INDEX 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:24 policy-db-migrator | DROP TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | 
policyadmin: OK: upgrade (1300) 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | -------------+--------- 23:16:24 policy-db-migrator | policyadmin | 1300 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 23:16:24 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.629799 23:16:24 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.678953 23:16:24 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.730923 23:16:24 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.780219 23:16:24 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 
1306252312450800u | 1 | 2025-06-13 23:12:45.830839 23:16:24 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.882283 23:16:24 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.934401 23:16:24 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:45.986604 23:16:24 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.048658 23:16:24 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.102621 23:16:24 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.157818 23:16:24 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.206595 23:16:24 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.259683 23:16:24 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.313062 23:16:24 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.357615 23:16:24 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.404823 23:16:24 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.456408 23:16:24 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.500029 23:16:24 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.55267 23:16:24 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.603933 23:16:24 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.654374 23:16:24 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.707003 23:16:24 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.755557 23:16:24 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.80579 23:16:24 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.856509 23:16:24 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.90995 23:16:24 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:46.96676 23:16:24 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.015756 23:16:24 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 
0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.06485 23:16:24 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.12301 23:16:24 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.169482 23:16:24 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.21659 23:16:24 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.261942 23:16:24 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.305173 23:16:24 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.357031 23:16:24 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.408054 23:16:24 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.464093 23:16:24 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.516046 23:16:24 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.575641 23:16:24 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.631804 23:16:24 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.686071 23:16:24 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.746564 23:16:24 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.795025 23:16:24 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.845206 23:16:24 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.900518 23:16:24 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:47.961317 23:16:24 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.007093 23:16:24 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.059166 23:16:24 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.114066 23:16:24 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.16434 23:16:24 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.222838 23:16:24 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.28342 23:16:24 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.330706 23:16:24 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306252312450800u | 
1 | 2025-06-13 23:12:48.391105 23:16:24 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.446722 23:16:24 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.502519 23:16:24 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.556394 23:16:24 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.606871 23:16:24 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.656448 23:16:24 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.708768 23:16:24 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.768925 23:16:24 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.821729 23:16:24 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.874511 23:16:24 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.933175 23:16:24 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:48.992591 23:16:24 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.05308 23:16:24 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.108557 23:16:24 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.171031 23:16:24 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.220815 23:16:24 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.266398 23:16:24 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.317989 23:16:24 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.363864 23:16:24 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.411905 23:16:24 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.467377 23:16:24 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.518551 23:16:24 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.566848 23:16:24 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.616006 23:16:24 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.662873 23:16:24 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 
0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.719958 23:16:24 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.76503 23:16:24 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.819396 23:16:24 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.872888 23:16:24 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.924111 23:16:24 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:49.985431 23:16:24 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.034139 23:16:24 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.085729 23:16:24 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.138293 23:16:24 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.190876 23:16:24 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.241292 23:16:24 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.286247 23:16:24 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.336116 23:16:24 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.388584 23:16:24 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.437127 23:16:24 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.491259 23:16:24 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.544381 23:16:24 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306252312450800u | 1 | 2025-06-13 23:12:50.602898 23:16:24 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.660749 23:16:24 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.712985 23:16:24 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.768339 23:16:24 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.821405 23:16:24 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.881459 23:16:24 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 
23:12:50.92843 23:16:24 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:50.968838 23:16:24 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.020008 23:16:24 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.070606 23:16:24 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.129699 23:16:24 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.185303 23:16:24 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.240004 23:16:24 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306252312450900u | 1 | 2025-06-13 23:12:51.292709 23:16:24 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.352666 23:16:24 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.409495 23:16:24 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.45721 23:16:24 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.51233 23:16:24 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.563154 23:16:24 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.622945 23:16:24 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.680398 23:16:24 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.736595 23:16:24 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306252312451000u | 1 | 2025-06-13 23:12:51.790428 23:16:24 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306252312451100u | 1 | 2025-06-13 23:12:51.837487 23:16:24 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:51.891064 23:16:24 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:51.946559 23:16:24 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:52.006515 23:16:24 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306252312451200u | 1 | 2025-06-13 23:12:52.063242 23:16:24 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.117532 23:16:24 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.167397 23:16:24 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306252312451300u | 1 | 2025-06-13 23:12:52.218837 23:16:24 policy-db-migrator | (126 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | policyadmin: OK @ 1300 23:16:24 
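The policyadmin changelog above ends with "(126 rows)" and "policyadmin: OK @ 1300": each migration script is recorded as one row with the columns id, script, operation, from_version, to_version, tag, success and attime. As a hedged illustration only (the real db-migrator in the policy/docker repo is a shell script; the connection URL, credentials and the policyadmin_schema_changelog table name here are assumptions made by analogy with the clampacm_schema_changelog / pooling_schema_changelog relations that appear later in this log), a JDBC query against that changelog could flag any script whose success flag is not 1:

```java
// Hypothetical inspection helper, NOT part of the ONAP db-migrator.
// URL, credentials and table name are placeholder assumptions for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChangelogInspector {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/migration"; // placeholder, not from this job
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "changeme");
             Statement st = conn.createStatement();
             // Columns match the changelog printed in the log:
             // id | script | operation | from_version | to_version | tag | success | attime
             ResultSet rs = st.executeQuery(
                     "SELECT script, to_version, attime FROM policyadmin_schema_changelog "
                     + "WHERE success <> 1 ORDER BY id")) {
            while (rs.next()) {
                System.out.printf("FAILED %s -> %s at %s%n",
                        rs.getString("script"),
                        rs.getString("to_version"),
                        rs.getTimestamp("attime"));
            }
        }
    }
}
```

In this run every recorded row shows success = 1, so a query like this would return no rows.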
policy-db-migrator | Initializing clampacm... 23:16:24 policy-db-migrator | 97 blocks 23:16:24 policy-db-migrator | Preparing upgrade release version: 1400 23:16:24 policy-db-migrator | Preparing upgrade release version: 1500 23:16:24 policy-db-migrator | Preparing upgrade release version: 1600 23:16:24 policy-db-migrator | Preparing upgrade release version: 1601 23:16:24 policy-db-migrator | Preparing upgrade release version: 1700 23:16:24 policy-db-migrator | Preparing upgrade release version: 1701 23:16:24 policy-db-migrator | Done 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | ----------+--------- 23:16:24 policy-db-migrator | clampacm | 0 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 23:16:24 policy-db-migrator | (0 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | upgrade: 0 -> 1701 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-automationcomposition.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0500-participant.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 
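Each clampacm script in the blocks above and below follows the same pattern: the migrator echoes "> upgrade <script>.sql", psql prints the command tags it produced (CREATE TABLE, CREATE INDEX, ALTER TABLE, UPDATE 0, ...), an "INSERT 0 1" records the script in the changelog, and "rc=0" reports the exit status before the next script starts. A minimal sketch of that loop, assuming psql on the PATH and the usual PG* environment variables for the target database; this is an illustration of the pattern visible in the log, not the migrator's actual shell code:

```java
// Illustrative only: the real migrator is a shell script in the policy/docker repo.
// Runs each *.sql via psql in name order and stops on a non-zero return code ("rc").
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class UpgradeRunner {
    public static void main(String[] args) throws Exception {
        Path scriptDir = Path.of(args.length > 0 ? args[0] : "sql/clampacm/upgrade"); // assumed layout
        List<Path> scripts;
        try (var stream = Files.list(scriptDir)) {
            scripts = stream.filter(p -> p.toString().endsWith(".sql"))
                            .sorted() // 0100-, 0200-, ... prefixes give the execution order
                            .collect(Collectors.toList());
        }
        for (Path script : scripts) {
            System.out.println("> upgrade " + script.getFileName());
            // Assumes psql is on PATH and PGHOST/PGUSER/PGPASSWORD point at the target DB.
            Process p = new ProcessBuilder("psql", "-d", "clampacm", "-f", script.toString())
                    .inheritIO()
                    .start();
            int rc = p.waitFor();
            System.out.println("rc=" + rc);
            if (rc != 0) {
                throw new IllegalStateException("upgrade failed at " + script.getFileName());
            }
        }
    }
}
```

Stopping at the first non-zero rc is an assumption about the migrator's behaviour; this log only shows the success path, where every block ends in rc=0.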
policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-automationcomposition.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0300-participantreplica.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0400-participant.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-automationcomposition.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | 
INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-automationcomposition.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-message.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-messagejob.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0200-automationcomposition.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0800-participantreplica.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | UPDATE 0 23:16:24 policy-db-migrator | ALTER TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | clampacm: OK: upgrade (1701) 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | ----------+--------- 23:16:24 policy-db-migrator | clampacm | 1701 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 23:16:24 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:52.906737 23:16:24 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:52.967887 23:16:24 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.027052 23:16:24 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.089636 23:16:24 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.14734 23:16:24 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.203471 23:16:24 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.258417 23:16:24 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.31186 23:16:24 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.370185 23:16:24 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.425815 23:16:24 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.479792 23:16:24 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.530805 23:16:24 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306252312521400u | 1 | 2025-06-13 23:12:53.584049 23:16:24 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.636411 23:16:24 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.694231 23:16:24 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.753536 23:16:24 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.804019 23:16:24 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.858749 23:16:24 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.913061 23:16:24 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:53.966506 23:16:24 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306252312521500u | 1 | 2025-06-13 23:12:54.023425 23:16:24 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306252312521600u | 1 | 2025-06-13 23:12:54.070627 23:16:24 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306252312521600u | 1 | 2025-06-13 23:12:54.123484 23:16:24 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 
1601 | 1306252312521601u | 1 | 2025-06-13 23:12:54.176313 23:16:24 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306252312521601u | 1 | 2025-06-13 23:12:54.223251 23:16:24 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.281249 23:16:24 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.332158 23:16:24 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306252312521700u | 1 | 2025-06-13 23:12:54.388734 23:16:24 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.447169 23:16:24 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.503629 23:16:24 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.556148 23:16:24 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.615559 23:16:24 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.671059 23:16:24 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.723289 23:16:24 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.780685 23:16:24 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.834724 23:16:24 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306252312521701u | 1 | 2025-06-13 23:12:54.878368 23:16:24 policy-db-migrator | (37 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | clampacm: OK @ 1701 23:16:24 policy-db-migrator | Initializing pooling... 
23:16:24 policy-db-migrator | 4 blocks 23:16:24 policy-db-migrator | Preparing upgrade release version: 1600 23:16:24 policy-db-migrator | Done 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | ---------+--------- 23:16:24 policy-db-migrator | pooling | 0 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 23:16:24 policy-db-migrator | (0 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | pooling: upgrade available: 0 -> 1600 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | upgrade: 0 -> 1600 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-distributed.locking.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | pooling: OK: upgrade (1600) 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | ---------+--------- 23:16:24 policy-db-migrator | pooling | 1600 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+--------------------------- 23:16:24 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306252312551600u | 1 | 2025-06-13 23:12:55.55567 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | pooling: OK @ 1600 23:16:24 policy-db-migrator | Initializing operationshistory... 23:16:24 policy-db-migrator | 6 blocks 23:16:24 policy-db-migrator | Preparing upgrade release version: 1600 23:16:24 policy-db-migrator | Done 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | -------------------+--------- 23:16:24 policy-db-migrator | operationshistory | 0 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 23:16:24 policy-db-migrator | (0 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | upgrade: 
0 -> 1600 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | rc=0 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | > upgrade 0110-operationshistory.sql 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | CREATE INDEX 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | INSERT 0 1 23:16:24 policy-db-migrator | operationshistory: OK: upgrade (1600) 23:16:24 policy-db-migrator | List of databases 23:16:24 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 23:16:24 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 23:16:24 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 23:16:24 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 23:16:24 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 23:16:24 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 23:16:24 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 23:16:24 policy-db-migrator | (9 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | CREATE TABLE 23:16:24 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 23:16:24 policy-db-migrator | name | version 23:16:24 policy-db-migrator | -------------------+--------- 23:16:24 policy-db-migrator | operationshistory | 1600 23:16:24 policy-db-migrator | (1 row) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 23:16:24 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 23:16:24 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306252312561600u | 1 | 2025-06-13 23:12:56.235619 23:16:24 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306252312561600u | 1 | 2025-06-13 23:12:56.302102 23:16:24 policy-db-migrator | (2 rows) 23:16:24 policy-db-migrator | 23:16:24 policy-db-migrator | operationshistory: OK @ 1600 23:16:24 policy-pap | Waiting for api port 6969... 23:16:24 policy-pap | api (172.17.0.8:6969) open 23:16:24 policy-pap | Waiting for kafka port 9092... 23:16:24 policy-pap | kafka (172.17.0.7:9092) open 23:16:24 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:24 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:24 policy-pap | 23:16:24 policy-pap | . ____ _ __ _ _ 23:16:24 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:24 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:24 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:24 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:24 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:24 policy-pap | 23:16:24 policy-pap | :: Spring Boot :: (v3.4.6) 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:09.890+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 51 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:24 policy-pap | [2025-06-13T23:13:09.892+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 23:16:24 policy-pap | [2025-06-13T23:13:11.392+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:24 policy-pap | [2025-06-13T23:13:11.487+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 7 JPA repository interfaces. 23:16:24 policy-pap | [2025-06-13T23:13:12.481+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 23:16:24 policy-pap | [2025-06-13T23:13:12.495+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:24 policy-pap | [2025-06-13T23:13:12.497+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:24 policy-pap | [2025-06-13T23:13:12.497+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 23:16:24 policy-pap | [2025-06-13T23:13:12.577+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:24 policy-pap | [2025-06-13T23:13:12.577+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2624 ms 23:16:24 policy-pap | [2025-06-13T23:13:13.072+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:24 policy-pap | [2025-06-13T23:13:13.155+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 23:16:24 policy-pap | [2025-06-13T23:13:13.205+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:24 policy-pap | [2025-06-13T23:13:13.635+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:24 policy-pap | [2025-06-13T23:13:13.686+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
23:16:24 policy-pap | [2025-06-13T23:13:13.916+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd 23:16:24 policy-pap | [2025-06-13T23:13:13.920+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:24 policy-pap | [2025-06-13T23:13:14.029+00:00|INFO|pooling|main] HHH10001005: Database info: 23:16:24 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 23:16:24 policy-pap | Database driver: undefined/unknown 23:16:24 policy-pap | Database version: 16.4 23:16:24 policy-pap | Autocommit mode: undefined/unknown 23:16:24 policy-pap | Isolation level: undefined/unknown 23:16:24 policy-pap | Minimum pool size: undefined/unknown 23:16:24 policy-pap | Maximum pool size: undefined/unknown 23:16:24 policy-pap | [2025-06-13T23:13:16.047+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:24 policy-pap | [2025-06-13T23:13:16.052+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:24 policy-pap | [2025-06-13T23:13:17.350+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:24 policy-pap | allow.auto.create.topics = true 23:16:24 policy-pap | auto.commit.interval.ms = 5000 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | auto.offset.reset = latest 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | check.crcs = true 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-1 23:16:24 policy-pap | client.rack = 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | default.api.timeout.ms = 60000 23:16:24 policy-pap | enable.auto.commit = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | exclude.internal.topics = true 23:16:24 policy-pap | fetch.max.bytes = 52428800 23:16:24 policy-pap | fetch.max.wait.ms = 500 23:16:24 policy-pap | fetch.min.bytes = 1 23:16:24 policy-pap | group.id = a84e6b67-b24e-431d-be69-da7e7df84a86 23:16:24 policy-pap | group.instance.id = null 23:16:24 policy-pap | group.protocol = classic 23:16:24 policy-pap | group.remote.assignor = null 23:16:24 policy-pap | heartbeat.interval.ms = 3000 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | internal.leave.group.on.close = true 23:16:24 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:24 policy-pap | isolation.level = read_uncommitted 23:16:24 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | max.partition.fetch.bytes = 1048576 23:16:24 policy-pap | max.poll.interval.ms = 300000 23:16:24 policy-pap | max.poll.records = 500 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:24 policy-pap | receive.buffer.bytes = 65536 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 
23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class = null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | session.timeout.ms = 45000 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:17.404+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 
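The ConsumerConfig dump above shows the effective settings of PAP's first Kafka consumer: bootstrap.servers [kafka:9092], security.protocol PLAINTEXT, StringDeserializer for keys and values, auto.offset.reset latest, and, a few lines further on, a subscription to the policy-pdp-pap topic. A minimal stand-alone consumer using the same key settings is sketched below, assuming kafka-clients is on the classpath; it illustrates the configuration only and is not PAP's actual messaging wiring:

```java
// Minimal consumer matching the key settings from the ConsumerConfig dump above.
// Illustration only; PAP builds its consumers through its own messaging layer.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");      // from the dump
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");               // group of PAP's second consumer; the first uses a generated UUID group
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put("security.protocol", "PLAINTEXT");                           // as in the dump

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                     // topic from the log
            while (true) {                                                     // poll until interrupted
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```

The second ConsumerConfig dump that follows differs mainly in client.id and group.id (policy-pap instead of the generated a84e6b67-... group); the rest of the settings are identical.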
policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:17.539+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856397537 23:16:24 policy-pap | [2025-06-13T23:13:17.542+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-1, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Subscribed to topic(s): policy-pdp-pap 23:16:24 policy-pap | [2025-06-13T23:13:17.543+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:24 policy-pap | allow.auto.create.topics = true 23:16:24 policy-pap | auto.commit.interval.ms = 5000 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | auto.offset.reset = latest 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | check.crcs = true 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = consumer-policy-pap-2 23:16:24 policy-pap | client.rack = 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | default.api.timeout.ms = 60000 23:16:24 policy-pap | enable.auto.commit = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | exclude.internal.topics = true 23:16:24 policy-pap | fetch.max.bytes = 52428800 23:16:24 policy-pap | fetch.max.wait.ms = 500 23:16:24 policy-pap | fetch.min.bytes = 1 23:16:24 policy-pap | group.id = policy-pap 23:16:24 policy-pap | group.instance.id = null 23:16:24 policy-pap | group.protocol = classic 23:16:24 policy-pap | group.remote.assignor = null 23:16:24 policy-pap | heartbeat.interval.ms = 3000 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | internal.leave.group.on.close = true 23:16:24 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:24 policy-pap | isolation.level = read_uncommitted 23:16:24 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | max.partition.fetch.bytes = 1048576 23:16:24 policy-pap | max.poll.interval.ms = 300000 23:16:24 policy-pap | max.poll.records = 500 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:24 policy-pap | receive.buffer.bytes = 65536 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class 
= null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | session.timeout.ms = 45000 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:17.544+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856397552 23:16:24 policy-pap | [2025-06-13T23:13:17.552+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:24 policy-pap | [2025-06-13T23:13:17.884+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The 
default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:24 policy-pap | [2025-06-13T23:13:18.012+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:24 policy-pap | [2025-06-13T23:13:18.089+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 23:16:24 policy-pap | [2025-06-13T23:13:18.304+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 23:16:24 policy-pap | [2025-06-13T23:13:19.133+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 23:16:24 policy-pap | [2025-06-13T23:13:19.265+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:24 policy-pap | [2025-06-13T23:13:19.292+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 23:16:24 policy-pap | [2025-06-13T23:13:19.316+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:24 policy-pap | [2025-06-13T23:13:19.316+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:24 policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:24 policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:24 policy-pap | [2025-06-13T23:13:19.317+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:24 policy-pap | [2025-06-13T23:13:19.318+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:24 policy-pap | [2025-06-13T23:13:19.318+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:24 policy-pap | [2025-06-13T23:13:19.320+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@76ec6ae0 23:16:24 policy-pap | [2025-06-13T23:13:19.332+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, 
consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.333+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:24 policy-pap | allow.auto.create.topics = true 23:16:24 policy-pap | auto.commit.interval.ms = 5000 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | auto.offset.reset = latest 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | check.crcs = true 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3 23:16:24 policy-pap | client.rack = 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | default.api.timeout.ms = 60000 23:16:24 policy-pap | enable.auto.commit = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | exclude.internal.topics = true 23:16:24 policy-pap | fetch.max.bytes = 52428800 23:16:24 policy-pap | fetch.max.wait.ms = 500 23:16:24 policy-pap | fetch.min.bytes = 1 23:16:24 policy-pap | group.id = a84e6b67-b24e-431d-be69-da7e7df84a86 23:16:24 policy-pap | group.instance.id = null 23:16:24 policy-pap | group.protocol = classic 23:16:24 policy-pap | group.remote.assignor = null 23:16:24 policy-pap | heartbeat.interval.ms = 3000 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | internal.leave.group.on.close = true 23:16:24 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:24 policy-pap | isolation.level = read_uncommitted 23:16:24 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | max.partition.fetch.bytes = 1048576 23:16:24 policy-pap | max.poll.interval.ms = 300000 23:16:24 policy-pap | max.poll.records = 500 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:24 policy-pap | receive.buffer.bytes = 65536 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class = null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = 
null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | session.timeout.ms = 45000 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:19.333+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 policy-pap | [2025-06-13T23:13:19.341+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399341 23:16:24 policy-pap | [2025-06-13T23:13:19.342+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Subscribed to topic(s): policy-pdp-pap 23:16:24 policy-pap | [2025-06-13T23:13:19.343+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:24 policy-pap | 
[2025-06-13T23:13:19.343+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@48a5ef5c 23:16:24 policy-pap | [2025-06-13T23:13:19.343+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.344+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:24 policy-pap | allow.auto.create.topics = true 23:16:24 policy-pap | auto.commit.interval.ms = 5000 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | auto.offset.reset = latest 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | check.crcs = true 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = consumer-policy-pap-4 23:16:24 policy-pap | client.rack = 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | default.api.timeout.ms = 60000 23:16:24 policy-pap | enable.auto.commit = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | exclude.internal.topics = true 23:16:24 policy-pap | fetch.max.bytes = 52428800 23:16:24 policy-pap | fetch.max.wait.ms = 500 23:16:24 policy-pap | fetch.min.bytes = 1 23:16:24 policy-pap | group.id = policy-pap 23:16:24 policy-pap | group.instance.id = null 23:16:24 policy-pap | group.protocol = classic 23:16:24 policy-pap | group.remote.assignor = null 23:16:24 policy-pap | heartbeat.interval.ms = 3000 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | internal.leave.group.on.close = true 23:16:24 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:24 policy-pap | isolation.level = read_uncommitted 23:16:24 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | max.partition.fetch.bytes = 1048576 23:16:24 policy-pap | max.poll.interval.ms = 300000 23:16:24 policy-pap | max.poll.records = 500 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:24 policy-pap | receive.buffer.bytes = 65536 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class = null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | session.timeout.ms = 45000 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:19.344+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:19.350+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399350 23:16:24 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:24 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:24 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=87659d9d-85dc-44ce-a3c8-1da58443d544, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a84e6b67-b24e-431d-be69-da7e7df84a86, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.351+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d5491ea2-aac8-4b21-abb9-b5cce523dbc1, alive=false, publisher=null]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.366+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:24 policy-pap | acks = -1 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | batch.size = 16384 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | buffer.memory = 33554432 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = producer-1 23:16:24 policy-pap | compression.gzip.level = -1 23:16:24 policy-pap | compression.lz4.level = 9 23:16:24 policy-pap | compression.type = none 23:16:24 policy-pap | compression.zstd.level = 3 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | delivery.timeout.ms = 120000 23:16:24 policy-pap | enable.idempotence = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:24 policy-pap | linger.ms = 0 
23:16:24 policy-pap | max.block.ms = 60000 23:16:24 policy-pap | max.in.flight.requests.per.connection = 5 23:16:24 policy-pap | max.request.size = 1048576 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.max.idle.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:24 policy-pap | partitioner.availability.timeout.ms = 0 23:16:24 policy-pap | partitioner.class = null 23:16:24 policy-pap | partitioner.ignore.keys = false 23:16:24 policy-pap | receive.buffer.bytes = 32768 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retries = 2147483647 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class = null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | 
ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | transaction.timeout.ms = 60000 23:16:24 policy-pap | transactional.id = null 23:16:24 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:19.367+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 policy-pap | [2025-06-13T23:13:19.386+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:24 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399403 23:16:24 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d5491ea2-aac8-4b21-abb9-b5cce523dbc1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:24 policy-pap | [2025-06-13T23:13:19.404+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c40c9331-4a56-4369-9ae1-ba78201c3bfa, alive=false, publisher=null]]: starting 23:16:24 policy-pap | [2025-06-13T23:13:19.405+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:24 policy-pap | acks = -1 23:16:24 policy-pap | auto.include.jmx.reporter = true 23:16:24 policy-pap | batch.size = 16384 23:16:24 policy-pap | bootstrap.servers = [kafka:9092] 23:16:24 policy-pap | buffer.memory = 33554432 23:16:24 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:24 policy-pap | client.id = producer-2 23:16:24 policy-pap | compression.gzip.level = -1 23:16:24 policy-pap | compression.lz4.level = 9 23:16:24 policy-pap | compression.type = none 23:16:24 policy-pap | compression.zstd.level = 3 23:16:24 policy-pap | connections.max.idle.ms = 540000 23:16:24 policy-pap | delivery.timeout.ms = 120000 23:16:24 policy-pap | enable.idempotence = true 23:16:24 policy-pap | enable.metrics.push = true 23:16:24 policy-pap | interceptor.classes = [] 23:16:24 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:24 policy-pap | linger.ms = 0 23:16:24 policy-pap | max.block.ms = 60000 23:16:24 policy-pap | max.in.flight.requests.per.connection = 5 23:16:24 policy-pap | max.request.size = 1048576 23:16:24 policy-pap | metadata.max.age.ms = 300000 23:16:24 policy-pap | metadata.max.idle.ms = 300000 23:16:24 policy-pap | metadata.recovery.strategy = none 23:16:24 policy-pap | metric.reporters = [] 23:16:24 policy-pap | metrics.num.samples = 2 23:16:24 policy-pap | metrics.recording.level = INFO 23:16:24 policy-pap | metrics.sample.window.ms = 30000 23:16:24 
policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:24 policy-pap | partitioner.availability.timeout.ms = 0 23:16:24 policy-pap | partitioner.class = null 23:16:24 policy-pap | partitioner.ignore.keys = false 23:16:24 policy-pap | receive.buffer.bytes = 32768 23:16:24 policy-pap | reconnect.backoff.max.ms = 1000 23:16:24 policy-pap | reconnect.backoff.ms = 50 23:16:24 policy-pap | request.timeout.ms = 30000 23:16:24 policy-pap | retries = 2147483647 23:16:24 policy-pap | retry.backoff.max.ms = 1000 23:16:24 policy-pap | retry.backoff.ms = 100 23:16:24 policy-pap | sasl.client.callback.handler.class = null 23:16:24 policy-pap | sasl.jaas.config = null 23:16:24 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:24 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:24 policy-pap | sasl.kerberos.service.name = null 23:16:24 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:24 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:24 policy-pap | sasl.login.callback.handler.class = null 23:16:24 policy-pap | sasl.login.class = null 23:16:24 policy-pap | sasl.login.connect.timeout.ms = null 23:16:24 policy-pap | sasl.login.read.timeout.ms = null 23:16:24 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:24 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:24 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:24 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:24 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.mechanism = GSSAPI 23:16:24 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:24 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:24 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:24 policy-pap | sasl.oauthbearer.header.urlencode = false 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:24 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:24 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:24 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:24 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:24 policy-pap | security.protocol = PLAINTEXT 23:16:24 policy-pap | security.providers = null 23:16:24 policy-pap | send.buffer.bytes = 131072 23:16:24 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:24 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:24 policy-pap | ssl.cipher.suites = null 23:16:24 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:24 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:24 policy-pap | ssl.engine.factory.class = null 23:16:24 policy-pap | ssl.key.password = null 23:16:24 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:24 policy-pap | ssl.keystore.certificate.chain = null 23:16:24 policy-pap | ssl.keystore.key = null 23:16:24 policy-pap | ssl.keystore.location = null 23:16:24 policy-pap | ssl.keystore.password = null 23:16:24 policy-pap | ssl.keystore.type = JKS 23:16:24 policy-pap | ssl.protocol = TLSv1.3 23:16:24 policy-pap | ssl.provider = null 23:16:24 policy-pap | ssl.secure.random.implementation = null 23:16:24 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:24 policy-pap | ssl.truststore.certificates = null 23:16:24 policy-pap | ssl.truststore.location = 
null 23:16:24 policy-pap | ssl.truststore.password = null 23:16:24 policy-pap | ssl.truststore.type = JKS 23:16:24 policy-pap | transaction.timeout.ms = 60000 23:16:24 policy-pap | transactional.id = null 23:16:24 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:24 policy-pap | 23:16:24 policy-pap | [2025-06-13T23:13:19.405+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 23:16:24 policy-pap | [2025-06-13T23:13:19.406+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749856399410 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c40c9331-4a56-4369-9ae1-ba78201c3bfa, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:24 policy-pap | [2025-06-13T23:13:19.410+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:24 policy-pap | [2025-06-13T23:13:19.413+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:24 policy-pap | [2025-06-13T23:13:19.413+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:24 policy-pap | [2025-06-13T23:13:19.414+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:24 policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:24 policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:24 policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:24 policy-pap | [2025-06-13T23:13:19.415+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:24 policy-pap | [2025-06-13T23:13:19.416+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:24 policy-pap | [2025-06-13T23:13:19.417+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:24 policy-pap | [2025-06-13T23:13:19.417+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.407 seconds (process running for 10.965) 23:16:24 policy-pap | [2025-06-13T23:13:19.870+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:24 policy-pap | [2025-06-13T23:13:19.870+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:24 policy-pap | [2025-06-13T23:13:19.874+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:24 policy-pap | [2025-06-13T23:13:19.874+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, 
groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:24 policy-pap | [2025-06-13T23:13:19.925+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 23:16:24 policy-pap | [2025-06-13T23:13:19.925+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 23:16:24 policy-pap | [2025-06-13T23:13:19.969+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:19.970+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 47INnyWnS9aLXhUugDQvzQ 23:16:24 policy-pap | [2025-06-13T23:13:20.106+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:24 policy-pap | [2025-06-13T23:13:20.112+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:20.354+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:20.385+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:20.818+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:20.872+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:24 policy-pap | [2025-06-13T23:13:21.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:24 policy-pap | [2025-06-13T23:13:21.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:24 policy-pap | [2025-06-13T23:13:21.646+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: 
consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6 23:16:24 policy-pap | [2025-06-13T23:13:21.646+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:24 policy-pap | [2025-06-13T23:13:21.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:24 policy-pap | [2025-06-13T23:13:21.778+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] (Re-)joining group 23:16:24 policy-pap | [2025-06-13T23:13:21.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Request joining group due to: need to re-join with the given member-id: consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5 23:16:24 policy-pap | [2025-06-13T23:13:21.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] (Re-)joining group 23:16:24 policy-pap | [2025-06-13T23:13:24.672+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6', protocol='range'} 23:16:24 policy-pap | [2025-06-13T23:13:24.681+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6=Assignment(partitions=[policy-pdp-pap-0])} 23:16:24 policy-pap | [2025-06-13T23:13:24.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cca724c0-ca48-47ec-9676-c63cc959bcf6', protocol='range'} 23:16:24 policy-pap | [2025-06-13T23:13:24.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:24 policy-pap | [2025-06-13T23:13:24.708+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:24 policy-pap | [2025-06-13T23:13:24.722+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:24 policy-pap | [2025-06-13T23:13:24.735+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
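(Note: the consumer lifecycle logged above — coordinator discovery, group join, range assignment of policy-pdp-pap-0, then an offset reset to the latest position because no committed offset exists and auto.offset.reset=latest — is standard kafka-clients behaviour for the ConsumerConfig values dumped earlier. Below is a minimal standalone sketch of an equivalent consumer, assuming kafka-clients 3.x on the classpath and that kafka:9092 is reachable from the container network; the class name and group id are illustrative only and not taken from the PAP code.)

// Minimal sketch of a consumer matching the logged ConsumerConfig values (illustrative, not PAP source).
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");            // bootstrap.servers as logged
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pdp-pap-log-inspector");           // illustrative group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");                 // produces the "Resetting offset" entries above
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                            // same topic policy-pap subscribes to
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("[IN|KAFKA|policy-pdp-pap] offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}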
23:16:24 policy-pap | [2025-06-13T23:13:24.792+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5', protocol='range'} 23:16:24 policy-pap | [2025-06-13T23:13:24.793+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Finished assignment for group at generation 1: {consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5=Assignment(partitions=[policy-pdp-pap-0])} 23:16:24 policy-pap | [2025-06-13T23:13:24.808+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3-89b4705e-d181-4123-a9a8-1ed4b11ab6c5', protocol='range'} 23:16:24 policy-pap | [2025-06-13T23:13:24.809+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:24 policy-pap | [2025-06-13T23:13:24.809+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Adding newly assigned partitions: policy-pdp-pap-0 23:16:24 policy-pap | [2025-06-13T23:13:24.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Found no committed offset for partition policy-pdp-pap-0 23:16:24 policy-pap | [2025-06-13T23:13:24.813+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a84e6b67-b24e-431d-be69-da7e7df84a86-3, groupId=a84e6b67-b24e-431d-be69-da7e7df84a86] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
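(Note: both producers configured earlier run with enable.idempotence=true, which is why the broker assigns each one a ProducerId with epoch 0 before any record is sent. Below is a minimal sketch of an equivalent idempotent String producer publishing one record to policy-pdp-pap; the class name, payload, and callback are illustrative only, not the PAP implementation.)

// Minimal sketch of an idempotent producer matching the logged ProducerConfig values (illustrative).
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");             // bootstrap.servers as logged
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");                   // yields "ProducerId set to N with epoch 0"
        props.put(ProducerConfig.ACKS_CONFIG, "all");                                  // equivalent to the logged acks = -1

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload only; the real PAP traffic is PDP_UPDATE / PDP_STATE_CHANGE JSON as shown below.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"TEST\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception == null) {
                    System.out.printf("[OUT|KAFKA|policy-pdp-pap] partition=%d offset=%d%n",
                            metadata.partition(), metadata.offset());
                } else {
                    exception.printStackTrace();
                }
            });
            producer.flush();
        }
    }
}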
23:16:24 policy-pap | [2025-06-13T23:13:41.318+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 23:16:24 policy-pap | [] 23:16:24 policy-pap | [2025-06-13T23:13:41.319+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:24 policy-pap | [2025-06-13T23:13:41.319+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"23e412ee-e25c-4dfd-95ce-cb9e23a3dd92","timestampMs":1749856421280,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:24 policy-pap | [2025-06-13T23:13:41.329+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:24 policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting 23:16:24 policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting listener 23:16:24 policy-pap | [2025-06-13T23:13:41.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting timer 23:16:24 policy-pap | [2025-06-13T23:13:41.405+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] 23:16:24 policy-pap | [2025-06-13T23:13:41.406+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.407+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate started 23:16:24 policy-pap | [2025-06-13T23:13:41.406+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] 23:16:24 policy-pap | [2025-06-13T23:13:41.412+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.453+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.454+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:24 policy-pap | [2025-06-13T23:13:41.455+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"174671f0-1735-4888-97b7-4da8a7d88fa3","timestampMs":1749856421389,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.455+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:24 policy-pap | [2025-06-13T23:13:41.484+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:24 policy-pap | [2025-06-13T23:13:41.486+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2e0c0d22-652d-40c3-b971-529ea20d635e","timestampMs":1749856421464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup"} 23:16:24 policy-pap | [2025-06-13T23:13:41.486+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:24 policy-pap | [2025-06-13T23:13:41.491+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"174671f0-1735-4888-97b7-4da8a7d88fa3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"e9c1012b-b24c-4833-bb93-1e7ff20a0e0e","timestampMs":1749856421465,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping 23:16:24 policy-pap | [2025-06-13T23:13:41.511+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 174671f0-1735-4888-97b7-4da8a7d88fa3 23:16:24 policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping timer 23:16:24 policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer 
[name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] 23:16:24 policy-pap | [2025-06-13T23:13:41.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping listener 23:16:24 policy-pap | [2025-06-13T23:13:41.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopped 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate successful 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 start publishing next request 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting listener 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting timer 23:16:24 policy-pap | [2025-06-13T23:13:41.520+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] 23:16:24 policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] 23:16:24 policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange starting enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.521+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.522+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange started 23:16:24 policy-pap | [2025-06-13T23:13:41.534+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.534+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:24 policy-pap | [2025-06-13T23:13:41.542+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.543+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f02a73d5-f851-430e-8ac3-44980b8e59ce 23:16:24 policy-pap | [2025-06-13T23:13:41.554+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f02a73d5-f851-430e-8ac3-44980b8e59ce","timestampMs":1749856421390,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.554+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f02a73d5-f851-430e-8ac3-44980b8e59ce","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"17b1f113-d210-45b4-8b0e-4d26a129ed40","timestampMs":1749856421534,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping timer 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopping listener 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange stopped 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpStateChange successful 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 start publishing next request 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting listener 23:16:24 policy-pap | [2025-06-13T23:13:41.556+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting timer 23:16:24 policy-pap | 
[2025-06-13T23:13:41.557+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=e476a123-aac9-4977-b04d-80df5d07a19a, expireMs=1749856451556] 23:16:24 policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate starting enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate started 23:16:24 policy-pap | [2025-06-13T23:13:41.557+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.566+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.566+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:24 policy-pap | [2025-06-13T23:13:41.568+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"source":"pap-31a53688-2b72-4290-b32e-d4cf0ec2cb7d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e476a123-aac9-4977-b04d-80df5d07a19a","timestampMs":1749856421547,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.568+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:24 policy-pap | [2025-06-13T23:13:41.579+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e476a123-aac9-4977-b04d-80df5d07a19a 23:16:24 policy-pap | [2025-06-13T23:13:41.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e476a123-aac9-4977-b04d-80df5d07a19a","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"272ed6da-38e1-4883-9655-2495f0ffae04","timestampMs":1749856421569,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping enqueue 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping timer 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e476a123-aac9-4977-b04d-80df5d07a19a, expireMs=1749856451556] 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopping listener 23:16:24 policy-pap | [2025-06-13T23:13:41.580+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate stopped 23:16:24 policy-pap | [2025-06-13T23:13:41.585+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 PdpUpdate successful 23:16:24 policy-pap | [2025-06-13T23:13:41.585+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-04c75d10-4c10-4f87-a04c-812ddcb845f3 has no more requests 23:16:24 policy-pap | [2025-06-13T23:13:41.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:24 policy-pap | [2025-06-13T23:13:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' 23:16:24 policy-pap | [2025-06-13T23:13:41.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 2 ms 23:16:24 policy-pap | [2025-06-13T23:14:11.406+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=174671f0-1735-4888-97b7-4da8a7d88fa3, expireMs=1749856451405] 23:16:24 policy-pap | [2025-06-13T23:14:11.520+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=f02a73d5-f851-430e-8ac3-44980b8e59ce, expireMs=1749856451520] 23:16:24 policy-pap | [2025-06-13T23:15:16.773+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:24 policy-pap | [2025-06-13T23:15:16.780+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:24 policy-pap | [2025-06-13T23:15:17.165+00:00|INFO|SessionData|http-nio-6969-exec-9] unknown group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:17.783+00:00|INFO|SessionData|http-nio-6969-exec-9] create cached group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:17.783+00:00|INFO|SessionData|http-nio-6969-exec-9] creating DB group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:18.256+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:18.551+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:24 policy-pap | [2025-06-13T23:15:18.639+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:24 policy-pap | [2025-06-13T23:15:18.639+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup 23:16:24 
policy-pap | [2025-06-13T23:15:18.640+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:18.653+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T23:15:18Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T23:15:18Z, user=policyadmin)] 23:16:24 policy-pap | [2025-06-13T23:15:19.334+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:24 policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:24 policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.335+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.346+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T23:15:19Z, user=policyadmin)] 23:16:24 policy-pap | [2025-06-13T23:15:19.417+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 23:16:24 policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group defaultGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:24 policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:24 policy-pap | [2025-06-13T23:15:19.734+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.735+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:19.743+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T23:15:19Z, user=policyadmin)] 23:16:24 policy-pap | [2025-06-13T23:15:20.271+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:20.273+00:00|INFO|SessionData|http-nio-6969-exec-3] deleting DB group testGroup 23:16:24 policy-pap | [2025-06-13T23:15:41.477+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 policy-pap | [2025-06-13T23:15:41.478+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:24 policy-pap | [2025-06-13T23:15:41.479+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:24 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","policies":[],"messageName":"PDP_STATUS","requestId":"d41cd52d-d758-489d-aabf-6f452c8bf3fc","timestampMs":1749856541464,"name":"apex-04c75d10-4c10-4f87-a04c-812ddcb845f3","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:24 postgres | The files belonging to this database system will be owned by user "postgres". 23:16:24 postgres | This user must also own the server process. 23:16:24 postgres | 23:16:24 postgres | The database cluster will be initialized with locale "en_US.utf8". 23:16:24 postgres | The default database encoding has accordingly been set to "UTF8". 23:16:24 postgres | The default text search configuration will be set to "english". 23:16:24 postgres | 23:16:24 postgres | Data page checksums are disabled. 23:16:24 postgres | 23:16:24 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 23:16:24 postgres | creating subdirectories ... ok 23:16:24 postgres | selecting dynamic shared memory implementation ... posix 23:16:24 postgres | selecting default max_connections ... 100 23:16:24 postgres | selecting default shared_buffers ... 128MB 23:16:24 postgres | selecting default time zone ... Etc/UTC 23:16:24 postgres | creating configuration files ... ok 23:16:24 postgres | running bootstrap script ... ok 23:16:24 postgres | performing post-bootstrap initialization ... ok 23:16:24 postgres | syncing data to disk ... ok 23:16:24 postgres | 23:16:24 postgres | 23:16:24 postgres | Success. You can now start the database server using: 23:16:24 postgres | 23:16:24 postgres | initdb: warning: enabling "trust" authentication for local connections 23:16:24 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 23:16:24 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 23:16:24 postgres | 23:16:24 postgres | waiting for server to start....2025-06-13 23:12:42.330 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 23:16:24 postgres | 2025-06-13 23:12:42.332 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 23:16:24 postgres | 2025-06-13 23:12:42.338 UTC [52] LOG: database system was shut down at 2025-06-13 23:12:41 UTC 23:16:24 postgres | 2025-06-13 23:12:42.344 UTC [49] LOG: database system is ready to accept connections 23:16:24 postgres | done 23:16:24 postgres | server started 23:16:24 postgres | 23:16:24 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 23:16:24 postgres | 23:16:24 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 23:16:24 postgres | #!/bin/bash -xv 23:16:24 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. 
All rights reserved 23:16:24 postgres | # 23:16:24 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:24 postgres | # you may not use this file except in compliance with the License. 23:16:24 postgres | # You may obtain a copy of the License at 23:16:24 postgres | # 23:16:24 postgres | # http://www.apache.org/licenses/LICENSE-2.0 23:16:24 postgres | # 23:16:24 postgres | # Unless required by applicable law or agreed to in writing, software 23:16:24 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:24 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:24 postgres | # See the License for the specific language governing permissions and 23:16:24 postgres | # limitations under the License. 23:16:24 postgres | 23:16:24 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 23:16:24 postgres | CREATE ROLE 23:16:24 postgres | 23:16:24 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | do 23:16:24 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 23:16:24 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 23:16:24 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 23:16:24 postgres | done 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 23:16:24 postgres | GRANT 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 23:16:24 postgres | GRANT 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 23:16:24 postgres | GRANT 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U 
postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 23:16:24 postgres | GRANT 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 23:16:24 postgres | GRANT 23:16:24 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 23:16:24 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 23:16:24 postgres | CREATE DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 23:16:24 postgres | ALTER DATABASE 23:16:24 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 23:16:24 postgres | GRANT 23:16:24 postgres | 23:16:24 postgres | 2025-06-13 23:12:43.671 UTC [49] LOG: received fast shutdown request 23:16:24 postgres | waiting for server to shut down....2025-06-13 23:12:43.673 UTC [49] LOG: aborting any active transactions 23:16:24 postgres | 2025-06-13 23:12:43.675 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 23:16:24 postgres | 2025-06-13 23:12:43.675 UTC [50] LOG: shutting down 23:16:24 postgres | 2025-06-13 23:12:43.677 UTC [50] LOG: checkpoint starting: shutdown immediate 23:16:24 postgres | 2025-06-13 23:12:44.061 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.303 s, sync=0.075 s, total=0.386 s; sync files=1788, longest=0.008 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 23:16:24 postgres | 2025-06-13 23:12:44.073 UTC [49] LOG: database system is shut down 23:16:24 postgres | done 23:16:24 postgres | server stopped 23:16:24 postgres | 23:16:24 postgres | PostgreSQL init process complete; ready for start up. 
23:16:24 postgres | 23:16:24 postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 23:16:24 postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 23:16:24 postgres | 2025-06-13 23:12:44.199 UTC [1] LOG: listening on IPv6 address "::", port 5432 23:16:24 postgres | 2025-06-13 23:12:44.202 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 23:16:24 postgres | 2025-06-13 23:12:44.208 UTC [102] LOG: database system was shut down at 2025-06-13 23:12:44 UTC 23:16:24 postgres | 2025-06-13 23:12:44.215 UTC [1] LOG: database system is ready to accept connections 23:16:24 prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 23:16:24 prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 23:16:24 prometheus | time=2025-06-13T23:12:45.548Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 23:16:24 prometheus | time=2025-06-13T23:12:45.553Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 23:16:24 prometheus | time=2025-06-13T23:12:45.559Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 23:16:24 prometheus | time=2025-06-13T23:12:45.560Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 23:16:24 prometheus | time=2025-06-13T23:12:45.562Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 23:16:24 prometheus | time=2025-06-13T23:12:45.562Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 23:16:24 prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 23:16:24 prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.96µs 23:16:24 prometheus | time=2025-06-13T23:12:45.568Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 23:16:24 prometheus | time=2025-06-13T23:12:45.570Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=933.105µs 23:16:24 prometheus | time=2025-06-13T23:12:45.570Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=192.029µs wal_replay_duration=977.597µs wbl_replay_duration=470ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.96µs total_replay_duration=1.921302ms 23:16:24 prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 23:16:24 prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1290 msg="TSDB started" 23:16:24 prometheus | time=2025-06-13T23:12:45.574Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:24 prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 23:16:24 prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.62µs remote_storage=2.981µs web_handler=920ns query_engine=1.58µs scrape=331.266µs scrape_sd=253.672µs notify=153.597µs notify_sd=32.742µs rules=1.91µs tracing=8.47µs filename=/etc/prometheus/prometheus.yml totalDuration=1.604488ms 23:16:24 prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 23:16:24 prometheus | time=2025-06-13T23:12:45.575Z level=INFO source=manager.go:175 msg="Starting rule manager..." 
component="rule manager" 23:16:24 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:24 simulator | overriding logback.xml 23:16:24 simulator | 2025-06-13 23:12:42,966 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:24 simulator | 2025-06-13 23:12:43,040 INFO org.onap.policy.models.simulators starting 23:16:24 simulator | 2025-06-13 23:12:43,040 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:24 simulator | 2025-06-13 23:12:43,261 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:24 simulator | 2025-06-13 23:12:43,262 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:24 simulator | 2025-06-13 23:12:43,500 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START 23:16:24 simulator | 2025-06-13 23:12:43,513 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 23:16:24 simulator | 2025-06-13 23:12:43,515 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@f0e995e{STOPPED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 23:16:24 simulator | 2025-06-13 23:12:43,524 INFO jetty-12.0.21; built: 
2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 23:16:24 simulator | 2025-06-13 23:12:43,585 INFO Session workerName=node0 23:16:24 simulator | 2025-06-13 23:12:43,606 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,283 INFO Using GSON for REST calls 23:16:24 simulator | 2025-06-13 23:12:44,348 INFO Started oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,358 INFO Started A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:24 simulator | 2025-06-13 23:12:44,359 INFO Started oejs.Server@30f5a68a{STARTING}[12.0.21,sto=0] @1922ms 23:16:24 simulator | 2025-06-13 23:12:44,359 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@30f5a68a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@24d4d7c9{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@f0e995e{STARTED}}, connector=A&AI simulator@4c37b5b{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-13c3c1e1==org.glassfish.jersey.servlet.ServletContainer@f0de2a7a{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4156 ms. 23:16:24 simulator | 2025-06-13 23:12:44,370 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:24 simulator | 2025-06-13 23:12:44,378 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START 23:16:24 simulator | 2025-06-13 23:12:44,378 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 23:16:24 simulator | 2025-06-13 23:12:44,388 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@15eebbff{STOPPED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 23:16:24 simulator | 2025-06-13 23:12:44,391 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 23:16:24 simulator | 2025-06-13 23:12:44,413 INFO Session workerName=node0 23:16:24 simulator | 2025-06-13 23:12:44,416 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,477 INFO Using GSON for REST calls 23:16:24 simulator | 2025-06-13 23:12:44,488 INFO Started oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,490 INFO Started SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:24 simulator | 2025-06-13 23:12:44,490 INFO Started oejs.Server@4baf352a{STARTING}[12.0.21,sto=0] @2053ms 23:16:24 simulator | 2025-06-13 23:12:44,490 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@4baf352a{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@1bb1fde8{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@15eebbff{STARTED}}, connector=SDNC simulator@22d6f11{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3829ac1==org.glassfish.jersey.servlet.ServletContainer@5ea3d315{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4898 ms. 
23:16:24 simulator | 2025-06-13 23:12:44,492 INFO org.onap.policy.models.simulators starting SO simulator 23:16:24 simulator | 2025-06-13 23:12:44,498 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START 23:16:24 simulator | 2025-06-13 23:12:44,499 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 23:16:24 simulator | 2025-06-13 23:12:44,501 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@3e34ace1{STOPPED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 23:16:24 simulator | 2025-06-13 23:12:44,501 INFO jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 23:16:24 simulator | 2025-06-13 23:12:44,504 INFO Session workerName=node0 23:16:24 simulator | 2025-06-13 23:12:44,505 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,558 INFO Using GSON for REST calls 23:16:24 simulator | 2025-06-13 23:12:44,570 INFO Started oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}} 23:16:24 simulator | 2025-06-13 23:12:44,571 INFO Started SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:24 simulator | 
2025-06-13 23:12:44,571 INFO Started oejs.Server@553f1d75{STARTING}[12.0.21,sto=0] @2134ms 23:16:24 simulator | 2025-06-13 23:12:44,572 INFO JettyJerseyServer [JerseyServlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=oejs.Server@553f1d75{STARTED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@6e1d8f9e{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@3e34ace1{STARTED}}, connector=SO simulator@62fe6067{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2dbe250d==org.glassfish.jersey.servlet.ServletContainer@79c62ffa{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 4928 ms. 23:16:24 simulator | 2025-06-13 23:12:44,573 INFO org.onap.policy.models.simulators started 23:16:25 zookeeper | ===> User 23:16:25 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:25 zookeeper | ===> Configuring ... 23:16:25 zookeeper | ===> Running preflight checks ... 23:16:25 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:25 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:25 zookeeper | ===> Launching ... 23:16:25 zookeeper | ===> Launching zookeeper ... 23:16:25 zookeeper | [2025-06-13 23:12:46,744] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,747] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,747] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,747] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,747] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,748] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:25 zookeeper | [2025-06-13 23:12:46,749] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:25 zookeeper | [2025-06-13 23:12:46,749] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:25 zookeeper | [2025-06-13 23:12:46,749] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:25 zookeeper | [2025-06-13 23:12:46,750] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:25 zookeeper | [2025-06-13 23:12:46,761] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 23:16:25 zookeeper | [2025-06-13 23:12:46,764] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:25 zookeeper | [2025-06-13 23:12:46,764] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:25 zookeeper | [2025-06-13 23:12:46,766] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:25 zookeeper | [2025-06-13 23:12:46,773] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,773] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,773] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,774] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/
usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219
.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,775] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,776] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,777] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
23:16:25 zookeeper | [2025-06-13 23:12:46,777] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,777] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,778] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:25 zookeeper | [2025-06-13 23:12:46,778] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,779] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:25 zookeeper | [2025-06-13 23:12:46,781] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,781] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,782] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
23:16:25 zookeeper | [2025-06-13 23:12:46,782] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
23:16:25 zookeeper | [2025-06-13 23:12:46,782] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,808] INFO Logging initialized @422ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
23:16:25 zookeeper | [2025-06-13 23:12:46,864] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
23:16:25 zookeeper | [2025-06-13 23:12:46,864] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
23:16:25 zookeeper | [2025-06-13 23:12:46,885] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
23:16:25 zookeeper | [2025-06-13 23:12:46,919] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
23:16:25 zookeeper | [2025-06-13 23:12:46,919] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:16:25 zookeeper | [2025-06-13 23:12:46,920] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
23:16:25 zookeeper | [2025-06-13 23:12:46,923] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:16:25 zookeeper | [2025-06-13 23:12:46,932] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:16:25 zookeeper | [2025-06-13 23:12:46,943] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:16:25 zookeeper | [2025-06-13 23:12:46,944] INFO Started @563ms (org.eclipse.jetty.server.Server)
23:16:25 zookeeper | [2025-06-13 23:12:46,944] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:16:25 zookeeper | [2025-06-13 23:12:46,948] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,949] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,950] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,954] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,977] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,978] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:25 zookeeper | [2025-06-13 23:12:46,978] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:25 zookeeper | [2025-06-13 23:12:46,978] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:25 zookeeper | [2025-06-13 23:12:46,989] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:25 zookeeper | [2025-06-13 23:12:46,989] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:25 zookeeper | [2025-06-13 23:12:46,995] INFO Snapshot loaded in 16 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:16:25 zookeeper | [2025-06-13 23:12:46,996] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:25 zookeeper | [2025-06-13 23:12:46,997] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:25 zookeeper | [2025-06-13 23:12:47,011] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
23:16:25 zookeeper | [2025-06-13 23:12:47,011] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
23:16:25 zookeeper | [2025-06-13 23:12:47,034] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
23:16:25 zookeeper | [2025-06-13 23:12:47,034] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
23:16:25 zookeeper | [2025-06-13 23:12:48,183] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
23:16:25 Tearing down containers...
23:16:25 Container grafana Stopping
23:16:25 Container policy-csit Stopping
23:16:25 Container policy-apex-pdp Stopping
23:16:25 Container policy-csit Stopped
23:16:25 Container policy-csit Removing
23:16:25 Container policy-csit Removed
23:16:25 Container grafana Stopped
23:16:25 Container grafana Removing
23:16:25 Container grafana Removed
23:16:25 Container prometheus Stopping
23:16:25 Container prometheus Stopped
23:16:25 Container prometheus Removing
23:16:25 Container prometheus Removed
23:16:35 Container policy-apex-pdp Stopped
23:16:35 Container policy-apex-pdp Removing
23:16:35 Container policy-apex-pdp Removed
23:16:35 Container policy-pap Stopping
23:16:35 Container simulator Stopping
23:16:45 Container simulator Stopped
23:16:45 Container simulator Removing
23:16:45 Container simulator Removed
23:16:45 Container policy-pap Stopped
23:16:45 Container policy-pap Removing
23:16:45 Container policy-pap Removed
23:16:45 Container kafka Stopping
23:16:45 Container policy-api Stopping
23:16:46 Container kafka Stopped
23:16:46 Container kafka Removing
23:16:46 Container kafka Removed
23:16:46 Container zookeeper Stopping
23:16:47 Container zookeeper Stopped
23:16:47 Container zookeeper Removing
23:16:47 Container zookeeper Removed
23:16:56 Container policy-api Stopped
23:16:56 Container policy-api Removing
23:16:56 Container policy-api Removed
23:16:56 Container policy-db-migrator Stopping
23:16:56 Container policy-db-migrator Stopped
23:16:56 Container policy-db-migrator Removing
23:16:56 Container policy-db-migrator Removed
23:16:56 Container postgres Stopping
23:16:56 Container postgres Stopped
23:16:56 Container postgres Removing
23:16:56 Container postgres Removed
23:16:56 Network compose_default Removing
23:16:56 Network compose_default Removed
23:16:56 $ ssh-agent -k
23:16:56 unset SSH_AUTH_SOCK;
23:16:56 unset SSH_AGENT_PID;
23:16:56 echo Agent pid 2071 killed;
23:16:56 [ssh-agent] Stopped.
23:16:56 Robot results publisher started...
23:16:56 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:16:56 -Parsing output xml:
23:16:57 Done!
23:16:57 -Copying log files to build dir:
23:16:57 Done!
23:16:57 -Assigning results to build:
23:16:57 Done!
23:16:57 -Checking thresholds:
23:16:57 Done!
23:16:57 Done publishing Robot results.
23:16:57 [PostBuildScript] - [INFO] Executing post build scripts.
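[Editor's note] The teardown above stops and removes every CSIT container (grafana, policy-csit, policy-apex-pdp, prometheus, simulator, policy-pap, kafka, zookeeper, policy-api, policy-db-migrator, postgres) and then deletes the compose_default network. The exact command is not echoed in this log; a minimal bash sketch of an equivalent manual cleanup, assuming the stack was started from a Compose project directory named "compose" (which would produce the compose_default network seen here), could be:

    #!/usr/bin/env bash
    # Hypothetical cleanup sketch -- the real CSIT teardown script is not shown in this log.
    set -euo pipefail
    # Assumed location of the docker-compose project; adjust to the actual CSIT layout.
    COMPOSE_DIR=${COMPOSE_DIR:-./compose}
    cd "${COMPOSE_DIR}"
    # Stop and remove this project's containers, its default network, and anonymous volumes.
    docker compose down --volumes --remove-orphans

Because Compose removes the project network only after all attached containers are gone, the ordering seen in the log (containers first, compose_default last) is the expected behaviour.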
23:16:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2130498470068225806.sh
23:16:57 ---> sysstat.sh
23:16:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11177215652846351191.sh
23:16:57 ---> package-listing.sh
23:16:57 ++ facter osfamily
23:16:57 ++ tr '[:upper:]' '[:lower:]'
23:16:58 + OS_FAMILY=debian
23:16:58 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:16:58 + START_PACKAGES=/tmp/packages_start.txt
23:16:58 + END_PACKAGES=/tmp/packages_end.txt
23:16:58 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:16:58 + PACKAGES=/tmp/packages_start.txt
23:16:58 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:58 + PACKAGES=/tmp/packages_end.txt
23:16:58 + case "${OS_FAMILY}" in
23:16:58 + dpkg -l
23:16:58 + grep '^ii'
23:16:58 + '[' -f /tmp/packages_start.txt ']'
23:16:58 + '[' -f /tmp/packages_end.txt ']'
23:16:58 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:16:58 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:58 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:58 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:58 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4698450907503617039.sh
23:16:58 ---> capture-instance-metadata.sh
23:16:58 Setup pyenv:
23:16:58 system
23:16:58 3.8.13
23:16:58 3.9.13
23:16:58 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:16:58 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
23:17:00 lf-activate-venv(): INFO: Installing: lftools
23:17:08 lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
23:17:08 INFO: Running in OpenStack, capturing instance metadata
23:17:08 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14460945584074144555.sh
23:17:08 provisioning config files...
23:17:08 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9487861863639614318tmp
23:17:08 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:08 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:08 [EnvInject] - Injecting environment variables from a build step.
23:17:08 [EnvInject] - Injecting as environment variables the properties content
23:17:08 SERVER_ID=logs
23:17:08
23:17:08 [EnvInject] - Variables injected successfully.
23:17:08 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7287300876831309683.sh
23:17:08 ---> create-netrc.sh
23:17:08 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15432271917607787771.sh
23:17:08 ---> python-tools-install.sh
23:17:08 Setup pyenv:
23:17:08 system
23:17:08 3.8.13
23:17:08 3.9.13
23:17:08 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
23:17:10 lf-activate-venv(): INFO: Installing: lftools
23:17:18 lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
23:17:18 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3882028246046477712.sh
23:17:18 ---> sudo-logs.sh
23:17:18 Archiving 'sudo' log..
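[Editor's note] The package-listing.sh trace a few lines above (the "+"-prefixed lines) shows the essential logic of that post-build step: detect the OS family with facter, snapshot the installed packages with dpkg, diff the start/end snapshots, and copy the results into the workspace archives. A minimal bash sketch reconstructing that flow (variable names taken from the trace; the real LF script handles more OS families and edge cases) could be:

    #!/usr/bin/env bash
    # Sketch of the package-listing flow seen in the trace above; not the actual LF script.
    set -euo pipefail
    OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')
    workspace=/w/workspace/policy-pap-master-project-csit-pap
    START_PACKAGES=/tmp/packages_start.txt
    END_PACKAGES=/tmp/packages_end.txt
    DIFF_PACKAGES=/tmp/packages_diff.txt
    # At the end of the job the current snapshot is written to packages_end.txt.
    PACKAGES=$START_PACKAGES
    [ -n "$workspace" ] && PACKAGES=$END_PACKAGES
    case "${OS_FAMILY}" in
      debian)
        dpkg -l | grep '^ii' > "$PACKAGES"
        ;;
    esac
    # Diff the two snapshots when both exist (diff exits non-zero when they differ), then archive everything.
    if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
      diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
    fi
    mkdir -p "$workspace/archives/"
    cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$workspace/archives/"

The archived packages_diff.txt makes it easy to spot packages that were installed or upgraded during the build.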
23:17:19 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13413713206687790454.sh
23:17:19 ---> job-cost.sh
23:17:19 Setup pyenv:
23:17:19 system
23:17:19 3.8.13
23:17:19 3.9.13
23:17:19 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:19 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
23:17:21 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:25 lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
23:17:25 INFO: No Stack...
23:17:25 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:26 INFO: Archiving Costs
23:17:26 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins5740275028267789400.sh
23:17:26 ---> logs-deploy.sh
23:17:26 Setup pyenv:
23:17:26 system
23:17:26 3.8.13
23:17:26 3.9.13
23:17:26 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:26 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-1fkA from file:/tmp/.os_lf_venv
23:17:28 lf-activate-venv(): INFO: Installing: lftools
23:17:36 lf-activate-venv(): INFO: Adding /tmp/venv-1fkA/bin to PATH
23:17:36 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/2101
23:17:36 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:17:37 Archives upload complete.
23:17:37 INFO: archiving logs to Nexus
23:17:38 ---> uname -a:
23:17:38 Linux prd-ubuntu1804-docker-8c-8g-20975 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:17:38
23:17:38
23:17:38 ---> lscpu:
23:17:38 Architecture: x86_64
23:17:38 CPU op-mode(s): 32-bit, 64-bit
23:17:38 Byte Order: Little Endian
23:17:38 CPU(s): 8
23:17:38 On-line CPU(s) list: 0-7
23:17:38 Thread(s) per core: 1
23:17:38 Core(s) per socket: 1
23:17:38 Socket(s): 8
23:17:38 NUMA node(s): 1
23:17:38 Vendor ID: AuthenticAMD
23:17:38 CPU family: 23
23:17:38 Model: 49
23:17:38 Model name: AMD EPYC-Rome Processor
23:17:38 Stepping: 0
23:17:38 CPU MHz: 2800.000
23:17:38 BogoMIPS: 5600.00
23:17:38 Virtualization: AMD-V
23:17:38 Hypervisor vendor: KVM
23:17:38 Virtualization type: full
23:17:38 L1d cache: 32K
23:17:38 L1i cache: 32K
23:17:38 L2 cache: 512K
23:17:38 L3 cache: 16384K
23:17:38 NUMA node0 CPU(s): 0-7
23:17:38 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:17:38
23:17:38
23:17:38 ---> nproc:
23:17:38 8
23:17:38
23:17:38
23:17:38 ---> df -h:
23:17:38 Filesystem Size Used Avail Use% Mounted on
23:17:38 udev 16G 0 16G 0% /dev
23:17:38 tmpfs 3.2G 708K 3.2G 1% /run
23:17:38 /dev/vda1 155G 16G 140G 10% /
23:17:38 tmpfs 16G 0 16G 0% /dev/shm
23:17:38 tmpfs 5.0M 0 5.0M 0% /run/lock
23:17:38 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:17:38 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:17:38 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:17:38
23:17:38
23:17:38 ---> free -m:
23:17:38 total used free shared buff/cache available
23:17:38 Mem: 32167 884 23269 0 8013 30827
23:17:38 Swap: 1023 0 1023
23:17:38
23:17:38
23:17:38 ---> ip addr:
23:17:38 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:17:38 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:17:38 inet 127.0.0.1/8 scope host lo
23:17:38 valid_lft forever preferred_lft forever
23:17:38 inet6 ::1/128 scope host
23:17:38 valid_lft forever preferred_lft forever
23:17:38 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:17:38 link/ether fa:16:3e:b8:4b:66 brd ff:ff:ff:ff:ff:ff
23:17:38 inet 10.30.107.209/23 brd 10.30.107.255 scope global dynamic ens3
23:17:38 valid_lft 85959sec preferred_lft 85959sec
23:17:38 inet6 fe80::f816:3eff:feb8:4b66/64 scope link
23:17:38 valid_lft forever preferred_lft forever
23:17:38 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:17:38 link/ether 02:42:ce:3c:ac:97 brd ff:ff:ff:ff:ff:ff
23:17:38 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:17:38 valid_lft forever preferred_lft forever
23:17:38 inet6 fe80::42:ceff:fe3c:ac97/64 scope link
23:17:38 valid_lft forever preferred_lft forever
23:17:38
23:17:38
23:17:38 ---> sar -b -r -n DEV:
23:17:38 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20975) 06/13/25 _x86_64_ (8 CPU)
23:17:38
23:17:38 23:10:20 LINUX RESTART (8 CPU)
23:17:38
23:17:38 23:11:01 tps rtps wtps bread/s bwrtn/s
23:17:38 23:12:01 191.29 26.09 165.19 2396.00 92980.74
23:17:38 23:13:01 691.12 5.15 685.97 456.32 244251.02
23:17:38 23:14:01 16.93 0.10 16.83 14.00 3750.84
23:17:38 23:15:01 218.90 0.33 218.56 30.79 33842.63
23:17:38 23:16:01 8.70 0.02 8.68 0.13 208.63
23:17:38 23:17:01 61.48 0.82 60.66 42.12 1130.82
23:17:38 Average: 198.06 5.42 192.65 489.94 62693.25
23:17:38
23:17:38 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:17:38 23:12:01 27657780 31544436 5281440 16.03 86964 4067916 2382368 7.01 1043252 3847588 2254372
23:17:38 23:13:01 23201416 30556152 9737804 29.56 164144 7255988 7276116 21.41 2261716 6805352 128
23:17:38 23:14:01 22157252 29620224 10781968 32.73 165772 7360904 8538240 25.12 3260176 6833504 212
23:17:38 23:15:01 21531320 29524872 11407900 34.63 206540 7798068 8877048 26.12 3447716 7211404 2096
23:17:38 23:16:01 21494148 29488896 11445072 34.75 206700 7798904 8959740 26.36 3486920 7208892 152
23:17:38 23:17:01 23862220 31590168 9077000 27.56 207488 7528452 1632200 4.80 1447256 6963152 11052
23:17:38 Average: 23317356 30387458 9621864 29.21 172935 6968372 6277619 18.47 2491173 6478315 378002
23:17:38
23:17:38 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:17:38 23:12:01 ens3 992.67 613.51 23974.53 52.56 0.00 0.00 0.00 0.00
23:17:38 23:12:01 lo 12.26 12.26 1.16 1.16 0.00 0.00 0.00 0.00
23:17:38 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:38 23:13:01 veth7433cdb 0.38 0.52 0.02 0.03 0.00 0.00 0.00 0.00
23:17:38 23:13:01 ens3 639.06 360.64 19724.04 30.00 0.00 0.00 0.00 0.00
23:17:38 23:13:01 veth139f22f 0.48 0.67 0.03 0.04 0.00 0.00 0.00 0.00
23:17:38 23:13:01 veth45d5c1c 40.66 49.26 3.11 311.18 0.00 0.00 0.00 0.03
23:17:38 23:14:01 veth7433cdb 3.68 4.67 0.59 0.47 0.00 0.00 0.00 0.00
23:17:38 23:14:01 ens3 7.58 3.65 6.10 0.91 0.00 0.00 0.00 0.00
23:17:38 23:14:01 veth139f22f 10.70 11.66 2.20 1.51 0.00 0.00 0.00 0.00
23:17:38 23:14:01 veth45d5c1c 0.28 0.45 0.02 0.03 0.00 0.00 0.00 0.00
23:17:38 23:15:01 vethacb633e 0.65 0.70 1.50 0.85 0.00 0.00 0.00 0.00
23:17:38 23:15:01 veth7433cdb 3.25 4.78 0.53 0.37 0.00 0.00 0.00 0.00
23:17:38 23:15:01 ens3 236.73 170.85 2207.93 13.48 0.00 0.00 0.00 0.00
23:17:38 23:15:01 veth139f22f 6.48 9.38 1.50 0.73 0.00 0.00 0.00 0.00
23:17:38 23:16:01 vethacb633e 1.70 1.43 0.23 1.01 0.00 0.00 0.00 0.00
23:17:38 23:16:01 veth7433cdb 3.28 4.73 0.54 0.37 0.00 0.00 0.00 0.00
23:17:38 23:16:01 ens3 1.73 1.55 0.39 0.53 0.00 0.00 0.00 0.00
23:17:38 23:16:01 veth139f22f 158.37 160.32 19.63 38.22 0.00 0.00 0.00 0.00
23:17:38 23:17:01 ens3 61.46 40.49 64.86 27.62 0.00 0.00 0.00 0.00
23:17:38 23:17:01 lo 27.89 27.89 2.51 2.51 0.00 0.00 0.00 0.00
23:17:38 23:17:01 docker0 135.80 189.79 8.55 1359.72 0.00 0.00 0.00 0.00
23:17:38 Average: ens3 323.22 198.46 7663.22 20.85 0.00 0.00 0.00 0.00
23:17:38 Average: lo 3.99 3.99 0.36 0.36 0.00 0.00 0.00 0.00
23:17:38 Average: docker0 22.64 31.63 1.43 226.65 0.00 0.00 0.00 0.00
23:17:38
23:17:38
23:17:38 ---> sar -P ALL:
23:17:38 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20975) 06/13/25 _x86_64_ (8 CPU)
23:17:38
23:17:38 23:10:20 LINUX RESTART (8 CPU)
23:17:38
23:17:38 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:17:38 23:12:01 all 12.88 0.00 2.55 3.26 0.04 81.26
23:17:38 23:12:01 0 24.67 0.00 2.92 1.00 0.05 71.35
23:17:38 23:12:01 1 10.97 0.00 2.09 0.43 0.03 86.47
23:17:38 23:12:01 2 21.00 0.00 2.77 3.32 0.07 72.85
23:17:38 23:12:01 3 5.27 0.00 1.91 0.22 0.03 92.58
23:17:38 23:12:01 4 4.36 0.00 2.78 16.93 0.03 75.90
23:17:38 23:12:01 5 11.88 0.00 2.40 3.62 0.05 82.06
23:17:38 23:12:01 6 10.63 0.00 2.72 0.10 0.05 86.50
23:17:38 23:12:01 7 14.21 0.00 2.79 0.50 0.02 82.48
23:17:38 23:13:01 all 23.33 0.00 8.33 10.30 0.07 57.96
23:17:38 23:13:01 0 21.75 0.00 9.18 37.68 0.08 31.31
23:17:38 23:13:01 1 23.01 0.00 8.27 1.77 0.07 66.89
23:17:38 23:13:01 2 26.20 0.00 8.89 4.03 0.08 60.80
23:17:38 23:13:01 3 24.75 0.00 7.49 12.70 0.05 55.01
23:17:38 23:13:01 4 23.12 0.00 7.19 17.95 0.08 51.65
23:17:38 23:13:01 5 23.69 0.00 7.42 3.73 0.07 65.09
23:17:38 23:13:01 6 23.87 0.00 8.08 2.70 0.08 65.26
23:17:38 23:13:01 7 20.19 0.00 10.18 2.05 0.07 67.52
23:17:38 23:14:01 all 18.74 0.00 1.51 0.16 0.05 79.54
23:17:38 23:14:01 0 23.57 0.00 1.62 0.00 0.07 74.75
23:17:38 23:14:01 1 20.19 0.00 1.53 0.00 0.05 78.24
23:17:38 23:14:01 2 19.14 0.00 1.75 0.05 0.07 78.99
23:17:38 23:14:01 3 16.16 0.00 1.11 0.02 0.07 82.65
23:17:38 23:14:01 4 18.27 0.00 1.62 0.00 0.03 80.07
23:17:38 23:14:01 5 20.77 0.00 1.52 0.00 0.07 77.65
23:17:38 23:14:01 6 17.07 0.00 1.51 1.19 0.05 80.18
23:17:38 23:14:01 7 14.72 0.00 1.44 0.03 0.07 83.74
23:17:38 23:15:01 all 8.95 0.00 2.42 1.42 0.06 87.16
23:17:38 23:15:01 0 11.14 0.00 2.66 3.42 0.07 82.72
23:17:38 23:15:01 1 6.80 0.00 1.94 0.50 0.05 90.71
23:17:38 23:15:01 2 9.47 0.00 3.44 0.44 0.05 86.60
23:17:38 23:15:01 3 10.31 0.00 2.21 1.16 0.07 86.26
23:17:38 23:15:01 4 6.18 0.00 2.08 0.94 0.07 90.73
23:17:38 23:15:01 5 8.31 0.00 2.59 0.17 0.05 88.88
23:17:38 23:15:01 6 4.55 0.00 1.61 3.03 0.05 90.76
23:17:38 23:15:01 7 14.80 0.00 2.77 1.71 0.05 80.67
23:17:38 23:16:01 all 4.03 0.00 0.37 0.03 0.04 95.52
23:17:38 23:16:01 0 5.54 0.00 0.40 0.00 0.05 94.01
23:17:38 23:16:01 1 4.39 0.00 0.33 0.00 0.03 95.25
23:17:38 23:16:01 2 3.07 0.00 0.47 0.02 0.05 96.39
23:17:38 23:16:01 3 3.41 0.00 0.33 0.13 0.05 96.07
23:17:38 23:16:01 4 5.26 0.00 0.33 0.02 0.02 94.38
23:17:38 23:16:01 5 3.61 0.00 0.38 0.00 0.07 95.94
23:17:38 23:16:01 6 4.11 0.00 0.38 0.02 0.03 95.46
23:17:38 23:16:01 7 2.89 0.00 0.33 0.00 0.03 96.75
23:17:38 23:17:01 all 2.35 0.00 0.74 0.10 0.03 96.77
23:17:38 23:17:01 0 2.30 0.00 0.75 0.23 0.03 96.69
23:17:38 23:17:01 1 2.45 0.00 0.70 0.05 0.03 96.77
23:17:38 23:17:01 2 1.50 0.00 0.69 0.15 0.03 97.63
23:17:38 23:17:01 3 1.52 0.00 0.72 0.07 0.03 97.66
23:17:38 23:17:01 4 1.82 0.00 0.69 0.03 0.03 97.42
23:17:38 23:17:01 5 1.79 0.00 0.75 0.20 0.03 97.23
23:17:38 23:17:01 6 3.34 0.00 0.68 0.03 0.03 95.91
23:17:38 23:17:01 7 4.08 0.00 1.00 0.07 0.02 94.84
23:17:38 Average: all 11.68 0.00 2.64 2.53 0.05 83.10
23:17:38 Average: 0 14.80 0.00 2.90 6.97 0.06 75.27
23:17:38 Average: 1 11.25 0.00 2.46 0.46 0.04 85.79
23:17:38 Average: 2 13.37 0.00 2.99 1.33 0.06 82.26
23:17:38 Average: 3 10.21 0.00 2.29 2.37 0.05 85.09
23:17:38 Average: 4 9.81 0.00 2.44 5.95 0.04 81.76
23:17:38 Average: 5 11.65 0.00 2.50 1.28 0.06 84.51
23:17:38 Average: 6 10.57 0.00 2.49 1.18 0.05 85.72
23:17:38 Average: 7 11.79 0.00 3.07 0.72 0.04 84.37
23:17:38
23:17:38
23:17:38
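[Editor's note] The resource tables above come from the sysstat post-build step shown earlier in the log ("---> sysstat.sh"), and the "--->" headers name the exact reporting commands. A minimal bash sketch of how such a report could be regenerated on the build node, assuming the sysstat package is installed and sadc has been sampling into the default daily data file, might be:

    #!/usr/bin/env bash
    # Sketch only: reproduces the reports shown above from an existing sysstat data file.
    # The data file path varies by distro (/var/log/sysstat on Debian/Ubuntu, /var/log/sa on others).
    DATAFILE=/var/log/sysstat/sa$(date +%d)

    sar -b -r -n DEV -f "${DATAFILE}"   # I/O rates, memory usage, and per-interface network statistics
    sar -P ALL -f "${DATAFILE}"         # per-CPU utilisation, including the "all" aggregate

Reading the aggregates above: the heaviest load is in the 23:12-23:13 interval (container image pulls and startup, ~244 MB/s written and ~58% idle CPU), after which the system settles to largely idle while the CSIT tests run.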