07:43:52 Started by timer
07:43:52 Running as SYSTEM
07:43:52 [EnvInject] - Loading node environment variables.
07:43:52 Building remotely on prd-ubuntu1804-docker-8c-8g-21536 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
07:43:52 [ssh-agent] Looking for ssh-agent implementation...
07:43:52 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
07:43:52 $ ssh-agent
07:43:52 SSH_AUTH_SOCK=/tmp/ssh-7NvzMKjZmkkw/agent.2033
07:43:52 SSH_AGENT_PID=2035
07:43:52 [ssh-agent] Started.
07:43:52 Running ssh-add (command line suppressed)
07:43:52 Identity added: /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_16631299516105113715.key (/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/private_key_16631299516105113715.key)
07:43:52 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
07:43:52 The recommended git tool is: NONE
07:43:54 using credential onap-jenkins-ssh
07:43:54 Wiping out workspace first.
07:43:54 Cloning the remote Git repository
07:43:54 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
07:43:54  > git init /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp # timeout=10
07:43:54 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
07:43:54  > git --version # timeout=10
07:43:54  > git --version # 'git version 2.17.1'
07:43:54 using GIT_SSH to set credentials Gerrit user
07:43:54 Verifying host key using manually-configured host key entries
07:43:54  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
07:43:54  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
07:43:54  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
07:43:55 Avoid second fetch
07:43:55  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
07:43:55 Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/remotes/origin/master)
07:43:55  > git config core.sparsecheckout # timeout=10
07:43:55  > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
07:43:55 Commit message: "Add Fix fail handling in ACM runtime in CSIT"
07:43:55  > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
07:43:58 provisioning config files...
07:43:58 copy managed file [npmrc] to file:/home/jenkins/.npmrc
07:43:58 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
07:43:58 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins15569588592646798711.sh
07:43:58 ---> python-tools-install.sh
07:43:58 Setup pyenv:
07:43:58 * system (set by /opt/pyenv/version)
07:43:58 * 3.8.13 (set by /opt/pyenv/version)
07:43:58 * 3.9.13 (set by /opt/pyenv/version)
07:43:58 * 3.10.6 (set by /opt/pyenv/version)
07:44:03 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-cXar
07:44:03 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
07:44:07 lf-activate-venv(): INFO: Installing: lftools
07:44:30 lf-activate-venv(): INFO: Adding /tmp/venv-cXar/bin to PATH
07:44:30 Generating Requirements File
07:44:49 Python 3.10.6
07:44:49 pip 25.1.1 from /tmp/venv-cXar/lib/python3.10/site-packages/pip (python 3.10)
07:44:49 appdirs==1.4.4
07:44:49 argcomplete==3.6.2
07:44:49 aspy.yaml==1.3.0
07:44:49 attrs==25.3.0
07:44:49 autopage==0.5.2
07:44:49 beautifulsoup4==4.13.4
07:44:49 boto3==1.38.36
07:44:49 botocore==1.38.36
07:44:49 bs4==0.0.2
07:44:49 cachetools==5.5.2
07:44:49 certifi==2025.6.15
07:44:49 cffi==1.17.1
07:44:49 cfgv==3.4.0
07:44:49 chardet==5.2.0
07:44:49 charset-normalizer==3.4.2
07:44:49 click==8.2.1
07:44:49 cliff==4.10.0
07:44:49 cmd2==2.6.1
07:44:49 cryptography==3.3.2
07:44:49 debtcollector==3.0.0
07:44:49 decorator==5.2.1
07:44:49 defusedxml==0.7.1
07:44:49 Deprecated==1.2.18
07:44:49 distlib==0.3.9
07:44:49 dnspython==2.7.0
07:44:49 docker==7.1.0
07:44:49 dogpile.cache==1.4.0
07:44:49 durationpy==0.10
07:44:49 email_validator==2.2.0
07:44:49 filelock==3.18.0
07:44:49 future==1.0.0
07:44:49 gitdb==4.0.12
07:44:49 GitPython==3.1.44
07:44:49 google-auth==2.40.3
07:44:49 httplib2==0.22.0
07:44:49 identify==2.6.12
07:44:49 idna==3.10
07:44:49 importlib-resources==1.5.0
07:44:49 iso8601==2.1.0
07:44:49 Jinja2==3.1.6
07:44:49 jmespath==1.0.1
07:44:49 jsonpatch==1.33
07:44:49 jsonpointer==3.0.0
07:44:49 jsonschema==4.24.0
07:44:49 jsonschema-specifications==2025.4.1
07:44:49 keystoneauth1==5.11.1
07:44:49 kubernetes==33.1.0
07:44:49 lftools==0.37.13
07:44:49 lxml==5.4.0
07:44:49 MarkupSafe==3.0.2
07:44:49 msgpack==1.1.1
07:44:49 multi_key_dict==2.0.3
07:44:49 munch==4.0.0
07:44:49 netaddr==1.3.0
07:44:49 niet==1.4.2
07:44:49 nodeenv==1.9.1
07:44:49 oauth2client==4.1.3
07:44:49 oauthlib==3.2.2
07:44:49 openstacksdk==4.6.0
07:44:49 os-client-config==2.1.0
07:44:49 os-service-types==1.7.0
07:44:49 osc-lib==4.0.2
07:44:49 oslo.config==9.8.0
07:44:49 oslo.context==6.0.0
07:44:49 oslo.i18n==6.5.1
07:44:49 oslo.log==7.1.0
07:44:49 oslo.serialization==5.7.0
07:44:49 oslo.utils==9.0.0
07:44:49 packaging==25.0
07:44:49 pbr==6.1.1
07:44:49 platformdirs==4.3.8
07:44:49 prettytable==3.16.0
07:44:49 psutil==7.0.0
07:44:49 pyasn1==0.6.1
07:44:49 pyasn1_modules==0.4.2
07:44:49 pycparser==2.22
07:44:49 pygerrit2==2.0.15
07:44:49 PyGithub==2.6.1
07:44:49 PyJWT==2.10.1
07:44:49 PyNaCl==1.5.0
07:44:49 pyparsing==2.4.7
07:44:49 pyperclip==1.9.0
07:44:49 pyrsistent==0.20.0
07:44:49 python-cinderclient==9.7.0
07:44:49 python-dateutil==2.9.0.post0
07:44:49 python-heatclient==4.2.0
07:44:49 python-jenkins==1.8.2
07:44:49 python-keystoneclient==5.6.0
07:44:49 python-magnumclient==4.8.1
07:44:49 python-openstackclient==8.1.0
07:44:49 python-swiftclient==4.8.0
07:44:49 PyYAML==6.0.2
07:44:49 referencing==0.36.2
07:44:49 requests==2.32.4
07:44:49 requests-oauthlib==2.0.0
07:44:49 requestsexceptions==1.4.0
07:44:49 rfc3986==2.0.0
07:44:49 rpds-py==0.25.1
07:44:49 rsa==4.9.1
07:44:49 ruamel.yaml==0.18.14
07:44:49 ruamel.yaml.clib==0.2.12
07:44:49 s3transfer==0.13.0
07:44:49 simplejson==3.20.1
07:44:49 six==1.17.0
07:44:49 smmap==5.0.2
07:44:49 soupsieve==2.7
07:44:49 stevedore==5.4.1
07:44:49 tabulate==0.9.0
07:44:49 toml==0.10.2
07:44:49 tomlkit==0.13.3
07:44:49 tqdm==4.67.1
07:44:49 typing_extensions==4.14.0
07:44:49 tzdata==2025.2
07:44:49 urllib3==1.26.20
07:44:49 virtualenv==20.31.2
07:44:49 wcwidth==0.2.13
07:44:49 websocket-client==1.8.0
07:44:49 wrapt==1.17.2
07:44:49 xdg==6.0.0
07:44:49 xmltodict==0.14.2
07:44:49 yq==3.4.3
07:44:49 [EnvInject] - Injecting environment variables from a build step.
07:44:49 [EnvInject] - Injecting as environment variables the properties content
07:44:49 SET_JDK_VERSION=openjdk17
07:44:49 GIT_URL="git://cloud.onap.org/mirror"
07:44:49 
07:44:49 [EnvInject] - Variables injected successfully.
07:44:49 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh /tmp/jenkins12522056457172994512.sh
07:44:49 ---> update-java-alternatives.sh
07:44:49 ---> Updating Java version
07:44:49 ---> Ubuntu/Debian system detected
07:44:49 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
07:44:49 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
07:44:49 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
07:44:50 openjdk version "17.0.4" 2022-07-19
07:44:50 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
07:44:50 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
07:44:50 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
07:44:50 [EnvInject] - Injecting environment variables from a build step.
07:44:50 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
07:44:50 [EnvInject] - Variables injected successfully.
07:44:50 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/sh -xe /tmp/jenkins10943450673702516246.sh
07:44:50 + /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/run-project-csit.sh drools-pdp
07:44:50 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
07:44:50 WARNING!
Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
07:44:50 Configure a credential helper to remove this warning. See
07:44:50 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
07:44:50 
07:44:50 Login Succeeded
07:44:50 docker: 'compose' is not a docker command.
07:44:50 See 'docker --help'
07:44:50 Docker Compose Plugin not installed. Installing now...
07:44:51 [curl progress meter elided: 60.2MB plugin binary downloaded]
07:44:51 Setting project configuration for: drools-pdp
07:44:51 Configuring docker compose...
07:44:53 Starting drools-pdp using postgres + Grafana/Prometheus
07:44:53 policy-db-migrator Pulling
07:44:53 prometheus Pulling
07:44:53 postgres Pulling
07:44:53 grafana Pulling
07:44:53 zookeeper Pulling
07:44:53 kafka Pulling
07:44:53 pap Pulling
07:44:53 drools-pdp Pulling
07:44:53 api Pulling
07:44:53-07:44:57 [per-layer progress frames elided: fs layers queued, downloaded, checksums verified, and extracted; pull continues below]
07:44:57 4ba79830ebce Extracting [================================> ] 109.2MB/166.8MB 07:44:57 e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB 07:44:57 c124ba1a8b26 Extracting [=========> ] 17.83MB/91.87MB 07:44:57 09d5a3f70313 Downloading [=> ] 3.784MB/109.2MB 07:44:57 b0e0ef7895f4 Downloading [=====> ] 4.144MB/37.01MB 07:44:57 55f2b468da67 Downloading [========> ] 43.79MB/257.9MB 07:44:57 1e017ebebdbd Extracting [=================> ] 13.37MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [=============> ] 20.61MB/76.12MB 07:44:57 4ba79830ebce Extracting [=================================> ] 112.5MB/166.8MB 07:44:57 c124ba1a8b26 Extracting [=============> ] 25.07MB/91.87MB 07:44:57 09d5a3f70313 Downloading [==> ] 6.487MB/109.2MB 07:44:57 b0e0ef7895f4 Downloading [==========> ] 7.536MB/37.01MB 07:44:57 55f2b468da67 Downloading [==========> ] 56.23MB/257.9MB 07:44:57 1e017ebebdbd Extracting [=======================> ] 17.69MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [===================> ] 28.97MB/76.12MB 07:44:57 b0e0ef7895f4 Downloading [============> ] 9.42MB/37.01MB 07:44:57 c124ba1a8b26 Extracting [================> ] 31.2MB/91.87MB 07:44:57 09d5a3f70313 Downloading [===> ] 8.65MB/109.2MB 07:44:57 4ba79830ebce Extracting [==================================> ] 116.4MB/166.8MB 07:44:57 55f2b468da67 Downloading [============> ] 62.18MB/257.9MB 07:44:57 1e017ebebdbd Extracting [==========================> ] 19.66MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [=====================> ] 32.87MB/76.12MB 07:44:57 c124ba1a8b26 Extracting [======================> ] 40.67MB/91.87MB 07:44:57 4ba79830ebce Extracting [===================================> ] 119.2MB/166.8MB 07:44:57 b0e0ef7895f4 Downloading [==================> ] 13.94MB/37.01MB 07:44:57 55f2b468da67 Downloading [==============> ] 75.15MB/257.9MB 07:44:57 09d5a3f70313 Downloading [======> ] 13.52MB/109.2MB 07:44:57 1e017ebebdbd Extracting [===============================> ] 
23.2MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [============================> ] 43.45MB/76.12MB 07:44:57 e032d0a5e409 Pull complete 07:44:57 b0e0ef7895f4 Downloading [=======================> ] 17.33MB/37.01MB 07:44:57 55f2b468da67 Downloading [================> ] 83.8MB/257.9MB 07:44:57 09d5a3f70313 Downloading [========> ] 17.84MB/109.2MB 07:44:57 c124ba1a8b26 Extracting [==========================> ] 48.46MB/91.87MB 07:44:57 4ba79830ebce Extracting [====================================> ] 122MB/166.8MB 07:44:57 1e017ebebdbd Extracting [===================================> ] 26.35MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [================================> ] 49.58MB/76.12MB 07:44:57 b0e0ef7895f4 Downloading [=============================> ] 21.48MB/37.01MB 07:44:57 55f2b468da67 Downloading [===================> ] 98.4MB/257.9MB 07:44:57 c124ba1a8b26 Extracting [===============================> ] 57.93MB/91.87MB 07:44:57 09d5a3f70313 Downloading [=============> ] 29.74MB/109.2MB 07:44:57 4ba79830ebce Extracting [=====================================> ] 125.3MB/166.8MB 07:44:57 c49e0ee60bfb Extracting [> ] 557.1kB/107.3MB 07:44:57 1e017ebebdbd Extracting [=======================================> ] 29.49MB/37.19MB 07:44:57 dcc0c3b2850c Extracting [=====================================> ] 57.38MB/76.12MB 07:44:57 55f2b468da67 Downloading [=====================> ] 110.8MB/257.9MB 07:44:57 b0e0ef7895f4 Downloading [===================================> ] 26MB/37.01MB 07:44:57 c124ba1a8b26 Extracting [===================================> ] 65.73MB/91.87MB 07:44:57 09d5a3f70313 Downloading [===================> ] 41.63MB/109.2MB 07:44:57 4ba79830ebce Extracting [======================================> ] 129.2MB/166.8MB 07:44:57 1e017ebebdbd Extracting [============================================> ] 33.03MB/37.19MB 07:44:57 c49e0ee60bfb Extracting [=> ] 3.342MB/107.3MB 07:44:57 dcc0c3b2850c Extracting [==========================================> ] 65.18MB/76.12MB 
07:44:57 55f2b468da67 Downloading [======================> ] 118.4MB/257.9MB 07:44:57 b0e0ef7895f4 Downloading [==============================================> ] 34.29MB/37.01MB 07:44:57 c124ba1a8b26 Extracting [=======================================> ] 71.86MB/91.87MB 07:44:57 09d5a3f70313 Downloading [=======================> ] 51.36MB/109.2MB 07:44:57 b0e0ef7895f4 Verifying Checksum 07:44:57 b0e0ef7895f4 Download complete 07:44:57 4ba79830ebce Extracting [=======================================> ] 132.6MB/166.8MB 07:44:57 c49e0ee60bfb Extracting [==> ] 5.014MB/107.3MB 07:44:57 356f5c2c843b Downloading [=========================================> ] 3.011kB/3.623kB 07:44:57 356f5c2c843b Downloading [==================================================>] 3.623kB/3.623kB 07:44:57 356f5c2c843b Download complete 07:44:58 1e017ebebdbd Extracting [==============================================> ] 34.6MB/37.19MB 07:44:58 dcc0c3b2850c Extracting [===============================================> ] 72.42MB/76.12MB 07:44:58 2d429b9e73a6 Downloading [> ] 293.8kB/29.13MB 07:44:58 55f2b468da67 Downloading [========================> ] 128.7MB/257.9MB 07:44:58 dcc0c3b2850c Extracting [==================================================>] 76.12MB/76.12MB 07:44:58 c124ba1a8b26 Extracting [===========================================> ] 80.22MB/91.87MB 07:44:58 09d5a3f70313 Downloading [============================> ] 61.64MB/109.2MB 07:44:58 dcc0c3b2850c Pull complete 07:44:58 4ba79830ebce Extracting [========================================> ] 135.9MB/166.8MB 07:44:58 eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB 07:44:58 c49e0ee60bfb Extracting [===> ] 7.242MB/107.3MB 07:44:58 eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB 07:44:58 1e017ebebdbd Extracting [=================================================> ] 36.57MB/37.19MB 07:44:58 2d429b9e73a6 Downloading [===============> ] 
8.846MB/29.13MB 07:44:58 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB 07:44:58 55f2b468da67 Downloading [==========================> ] 139MB/257.9MB 07:44:58 c124ba1a8b26 Extracting [================================================> ] 89.69MB/91.87MB 07:44:58 09d5a3f70313 Downloading [=================================> ] 74.07MB/109.2MB 07:44:58 c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB 07:44:58 4ba79830ebce Extracting [=========================================> ] 139.3MB/166.8MB 07:44:58 2d429b9e73a6 Downloading [==============================> ] 17.69MB/29.13MB 07:44:58 c124ba1a8b26 Pull complete 07:44:58 c49e0ee60bfb Extracting [====> ] 10.58MB/107.3MB 07:44:58 1e017ebebdbd Pull complete 07:44:58 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 07:44:58 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 07:44:58 55f2b468da67 Downloading [============================> ] 149.2MB/257.9MB 07:44:58 09d5a3f70313 Downloading [========================================> ] 88.67MB/109.2MB 07:44:58 eb7cda286a15 Pull complete 07:44:58 api Pulled 07:44:58 4ba79830ebce Extracting [==========================================> ] 143.2MB/166.8MB 07:44:58 2d429b9e73a6 Downloading [=================================================> ] 28.9MB/29.13MB 07:44:58 2d429b9e73a6 Verifying Checksum 07:44:58 2d429b9e73a6 Download complete 07:44:58 46eab5b44a35 Downloading [==================================================>] 1.168kB/1.168kB 07:44:58 46eab5b44a35 Verifying Checksum 07:44:58 46eab5b44a35 Download complete 07:44:58 55f2b468da67 Downloading [================================> ] 166.5MB/257.9MB 07:44:58 c49e0ee60bfb Extracting [======> ] 14.48MB/107.3MB 07:44:58 c4d302cc468d Downloading [> ] 48.06kB/4.534MB 07:44:58 09d5a3f70313 Downloading [================================================> ] 
104.9MB/109.2MB 07:44:58 6394804c2196 Pull complete 07:44:58 pap Pulled 07:44:58 09d5a3f70313 Verifying Checksum 07:44:58 09d5a3f70313 Download complete 07:44:58 01e0882c90d9 Downloading [> ] 15.3kB/1.447MB 07:44:58 c4d302cc468d Verifying Checksum 07:44:58 c4d302cc468d Download complete 07:44:58 4ba79830ebce Extracting [============================================> ] 147.6MB/166.8MB 07:44:58 531ee2cf3c0c Downloading [> ] 80.83kB/8.066MB 07:44:58 01e0882c90d9 Verifying Checksum 07:44:58 01e0882c90d9 Download complete 07:44:58 55f2b468da67 Downloading [==================================> ] 179.5MB/257.9MB 07:44:58 2d429b9e73a6 Extracting [> ] 294.9kB/29.13MB 07:44:58 ed54a7dee1d8 Downloading [> ] 15.3kB/1.196MB 07:44:58 c49e0ee60bfb Extracting [=======> ] 16.71MB/107.3MB 07:44:58 ed54a7dee1d8 Verifying Checksum 07:44:58 ed54a7dee1d8 Download complete 07:44:58 12c5c803443f Downloading [==================================================>] 116B/116B 07:44:58 12c5c803443f Verifying Checksum 07:44:58 12c5c803443f Download complete 07:44:58 e27c75a98748 Downloading [===============================================> ] 3.011kB/3.144kB 07:44:58 e27c75a98748 Downloading [==================================================>] 3.144kB/3.144kB 07:44:58 e27c75a98748 Verifying Checksum 07:44:58 e27c75a98748 Download complete 07:44:58 4ba79830ebce Extracting [=============================================> ] 151MB/166.8MB 07:44:58 531ee2cf3c0c Verifying Checksum 07:44:58 531ee2cf3c0c Download complete 07:44:58 a83b68436f09 Downloading [===============> ] 3.011kB/9.919kB 07:44:58 a83b68436f09 Downloading [==================================================>] 9.919kB/9.919kB 07:44:58 a83b68436f09 Verifying Checksum 07:44:58 a83b68436f09 Download complete 07:44:58 e73cb4a42719 Downloading [> ] 539.6kB/109.1MB 07:44:58 55f2b468da67 Downloading [=====================================> ] 194.6MB/257.9MB 07:44:58 2d429b9e73a6 Extracting [======> ] 3.539MB/29.13MB 07:44:58 787d6bee9571 
Downloading [==================================================>] 127B/127B 07:44:58 787d6bee9571 Verifying Checksum 07:44:58 787d6bee9571 Download complete 07:44:58 13ff0988aaea Download complete 07:44:58 4b82842ab819 Downloading [===========================> ] 3.011kB/5.415kB 07:44:58 4b82842ab819 Downloading [==================================================>] 5.415kB/5.415kB 07:44:58 4b82842ab819 Verifying Checksum 07:44:58 4b82842ab819 Download complete 07:44:58 7e568a0dc8fb Downloading [==================================================>] 184B/184B 07:44:58 7e568a0dc8fb Verifying Checksum 07:44:58 7e568a0dc8fb Download complete 07:44:58 c49e0ee60bfb Extracting [========> ] 17.83MB/107.3MB 07:44:58 9fa9226be034 Downloading [> ] 15.3kB/783kB 07:44:58 9fa9226be034 Downloading [==================================================>] 783kB/783kB 07:44:58 9fa9226be034 Verifying Checksum 07:44:58 9fa9226be034 Extracting [==> ] 32.77kB/783kB 07:44:58 4ba79830ebce Extracting [==============================================> ] 155.4MB/166.8MB 07:44:58 e73cb4a42719 Downloading [====> ] 9.731MB/109.1MB 07:44:58 55f2b468da67 Downloading [========================================> ] 208.7MB/257.9MB 07:44:58 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 07:44:58 2d429b9e73a6 Extracting [============> ] 7.373MB/29.13MB 07:44:58 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 07:44:58 1617e25568b2 Verifying Checksum 07:44:58 1617e25568b2 Download complete 07:44:58 c49e0ee60bfb Extracting [=========> ] 20.05MB/107.3MB 07:44:58 6ac0e4adf315 Downloading [> ] 539.6kB/62.07MB 07:44:58 e73cb4a42719 Downloading [========> ] 17.84MB/109.1MB 07:44:58 55f2b468da67 Downloading [===========================================> ] 225.5MB/257.9MB 07:44:58 4ba79830ebce Extracting [===============================================> ] 158.2MB/166.8MB 07:44:58 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 07:44:58 2d429b9e73a6 Extracting 
[================> ] 9.732MB/29.13MB 07:44:58 9fa9226be034 Extracting [==================================================>] 783kB/783kB 07:44:58 c49e0ee60bfb Extracting [===========> ] 23.95MB/107.3MB 07:44:58 6ac0e4adf315 Downloading [===> ] 3.784MB/62.07MB 07:44:58 9fa9226be034 Pull complete 07:44:58 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 07:44:58 55f2b468da67 Downloading [==============================================> ] 237.9MB/257.9MB 07:44:58 e73cb4a42719 Downloading [==============> ] 30.82MB/109.1MB 07:44:58 2d429b9e73a6 Extracting [=====================> ] 12.39MB/29.13MB 07:44:58 c49e0ee60bfb Extracting [==============> ] 30.08MB/107.3MB 07:44:58 4ba79830ebce Extracting [===============================================> ] 159.9MB/166.8MB 07:44:58 6ac0e4adf315 Downloading [======> ] 8.109MB/62.07MB 07:44:58 55f2b468da67 Downloading [================================================> ] 251.4MB/257.9MB 07:44:58 2d429b9e73a6 Extracting [==========================> ] 15.34MB/29.13MB 07:44:58 e73cb4a42719 Downloading [=====================> ] 45.96MB/109.1MB 07:44:58 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 07:44:59 55f2b468da67 Verifying Checksum 07:44:59 55f2b468da67 Download complete 07:44:59 6ac0e4adf315 Downloading [==========> ] 12.43MB/62.07MB 07:44:59 c49e0ee60bfb Extracting [================> ] 35.09MB/107.3MB 07:44:59 4ba79830ebce Extracting [=================================================> ] 163.8MB/166.8MB 07:44:59 f3b09c502777 Downloading [> ] 539.6kB/56.52MB 07:44:59 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 07:44:59 2d429b9e73a6 Extracting [===============================> ] 18.28MB/29.13MB 07:44:59 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 07:44:59 e73cb4a42719 Downloading [==========================> ] 56.77MB/109.1MB 07:44:59 6ac0e4adf315 Downloading [================> ] 20.54MB/62.07MB 07:44:59 
f3b09c502777 Downloading [=====> ] 5.946MB/56.52MB 07:44:59 c49e0ee60bfb Extracting [=================> ] 37.88MB/107.3MB 07:44:59 2d429b9e73a6 Extracting [=====================================> ] 21.82MB/29.13MB 07:44:59 e73cb4a42719 Downloading [==============================> ] 66.5MB/109.1MB 07:44:59 1617e25568b2 Pull complete 07:44:59 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 07:44:59 4ba79830ebce Extracting [=================================================> ] 166MB/166.8MB 07:44:59 6ac0e4adf315 Downloading [=========================> ] 31.9MB/62.07MB 07:44:59 f3b09c502777 Downloading [============> ] 14.06MB/56.52MB 07:44:59 c49e0ee60bfb Extracting [==================> ] 40.67MB/107.3MB 07:44:59 e73cb4a42719 Downloading [=====================================> ] 81.1MB/109.1MB 07:44:59 2d429b9e73a6 Extracting [==========================================> ] 24.48MB/29.13MB 07:44:59 55f2b468da67 Extracting [==> ] 11.14MB/257.9MB 07:44:59 4ba79830ebce Extracting [==================================================>] 166.8MB/166.8MB 07:44:59 6ac0e4adf315 Downloading [====================================> ] 44.87MB/62.07MB 07:44:59 f3b09c502777 Downloading [=====================> ] 23.79MB/56.52MB 07:44:59 c49e0ee60bfb Extracting [===================> ] 42.89MB/107.3MB 07:44:59 e73cb4a42719 Downloading [==========================================> ] 91.91MB/109.1MB 07:44:59 2d429b9e73a6 Extracting [===========================================> ] 25.07MB/29.13MB 07:44:59 4ba79830ebce Pull complete 07:44:59 6ac0e4adf315 Downloading [===============================================> ] 58.39MB/62.07MB 07:44:59 55f2b468da67 Extracting [===> ] 20.05MB/257.9MB 07:44:59 d223479d7367 Extracting [> ] 98.3kB/6.742MB 07:44:59 f3b09c502777 Downloading [==============================> ] 34.06MB/56.52MB 07:44:59 6ac0e4adf315 Verifying Checksum 07:44:59 6ac0e4adf315 Download complete 07:44:59 e73cb4a42719 Downloading [=================================================> ] 
107.1MB/109.1MB 07:44:59 c49e0ee60bfb Extracting [=====================> ] 46.24MB/107.3MB 07:44:59 408012a7b118 Downloading [==================================================>] 637B/637B 07:44:59 408012a7b118 Verifying Checksum 07:44:59 408012a7b118 Download complete 07:44:59 e73cb4a42719 Verifying Checksum 07:44:59 e73cb4a42719 Download complete 07:44:59 44986281b8b9 Downloading [=====================================> ] 3.011kB/4.022kB 07:44:59 44986281b8b9 Download complete 07:44:59 2d429b9e73a6 Extracting [===============================================> ] 27.72MB/29.13MB 07:44:59 bf70c5107ab5 Downloading [==================================================>] 1.44kB/1.44kB 07:44:59 bf70c5107ab5 Verifying Checksum 07:44:59 bf70c5107ab5 Download complete 07:44:59 1ccde423731d Downloading [==> ] 3.01kB/61.44kB 07:44:59 1ccde423731d Downloading [==================================================>] 61.44kB/61.44kB 07:44:59 1ccde423731d Verifying Checksum 07:44:59 1ccde423731d Download complete 07:44:59 7221d93db8a9 Downloading [==================================================>] 100B/100B 07:44:59 7221d93db8a9 Verifying Checksum 07:44:59 7221d93db8a9 Download complete 07:44:59 7df673c7455d Downloading [==================================================>] 694B/694B 07:44:59 7df673c7455d Verifying Checksum 07:44:59 7df673c7455d Download complete 07:44:59 eca0188f477e Downloading [> ] 375.7kB/37.17MB 07:44:59 e444bcd4d577 Download complete 07:44:59 55f2b468da67 Extracting [====> ] 22.84MB/257.9MB 07:44:59 f3b09c502777 Downloading [======================================> ] 43.79MB/56.52MB 07:44:59 eabd8714fec9 Downloading [> ] 539.6kB/375MB 07:44:59 c49e0ee60bfb Extracting [=======================> ] 49.58MB/107.3MB 07:44:59 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 07:44:59 d223479d7367 Extracting [==> ] 294.9kB/6.742MB 07:44:59 eca0188f477e Downloading [==========> ] 7.912MB/37.17MB 07:44:59 f3b09c502777 Downloading 
[=================================================> ] 55.69MB/56.52MB 07:44:59 f3b09c502777 Verifying Checksum 07:44:59 f3b09c502777 Download complete 07:44:59 6ac0e4adf315 Extracting [==> ] 2.785MB/62.07MB 07:44:59 eabd8714fec9 Downloading [> ] 5.946MB/375MB 07:44:59 45fd2fec8a19 Downloading [==================================================>] 1.103kB/1.103kB 07:44:59 45fd2fec8a19 Verifying Checksum 07:44:59 45fd2fec8a19 Download complete 07:44:59 d223479d7367 Extracting [=============> ] 1.868MB/6.742MB 07:44:59 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 07:44:59 c49e0ee60bfb Extracting [========================> ] 51.81MB/107.3MB 07:44:59 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 07:44:59 8f10199ed94b Downloading [> ] 97.22kB/8.768MB 07:44:59 eca0188f477e Downloading [=========================> ] 18.84MB/37.17MB 07:44:59 eabd8714fec9 Downloading [==> ] 15.14MB/375MB 07:44:59 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB 07:44:59 d223479d7367 Extracting [========================> ] 3.342MB/6.742MB 07:44:59 6ac0e4adf315 Extracting [====> ] 5.571MB/62.07MB 07:44:59 c49e0ee60bfb Extracting [=========================> ] 54.03MB/107.3MB 07:44:59 8f10199ed94b Downloading [=================> ] 3.145MB/8.768MB 07:44:59 55f2b468da67 Extracting [======> ] 31.2MB/257.9MB 07:44:59 eca0188f477e Downloading [========================================> ] 29.77MB/37.17MB 07:44:59 eabd8714fec9 Downloading [===> ] 28.65MB/375MB 07:44:59 eca0188f477e Verifying Checksum 07:44:59 eca0188f477e Download complete 07:44:59 d223479d7367 Extracting [================================> ] 4.424MB/6.742MB 07:44:59 6ac0e4adf315 Extracting [======> ] 8.356MB/62.07MB 07:44:59 8f10199ed94b Downloading [==========================================> ] 7.372MB/8.768MB 07:44:59 2d429b9e73a6 Pull complete 07:44:59 c49e0ee60bfb Extracting [==========================> ] 56.82MB/107.3MB 07:44:59 
46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 07:44:59 55f2b468da67 Extracting [=======> ] 38.44MB/257.9MB 07:44:59 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 07:44:59 f963a77d2726 Downloading [=======> ] 3.01kB/21.44kB 07:44:59 f963a77d2726 Downloading [==================================================>] 21.44kB/21.44kB 07:44:59 f963a77d2726 Verifying Checksum 07:44:59 f963a77d2726 Download complete 07:44:59 8f10199ed94b Verifying Checksum 07:44:59 8f10199ed94b Download complete 07:45:00 79161a3f5362 Downloading [================================> ] 3.011kB/4.656kB 07:45:00 79161a3f5362 Download complete 07:45:00 eabd8714fec9 Downloading [=====> ] 40.01MB/375MB 07:45:00 d223479d7367 Extracting [==========================================> ] 5.702MB/6.742MB 07:45:00 9c266ba63f51 Download complete 07:45:00 f3a82e9f1761 Downloading [> ] 457.7kB/44.41MB 07:45:00 6ac0e4adf315 Extracting [========> ] 11.14MB/62.07MB 07:45:00 2e8a7df9c2ee Downloading [==================================================>] 851B/851B 07:45:00 2e8a7df9c2ee Verifying Checksum 07:45:00 2e8a7df9c2ee Download complete 07:45:00 eca0188f477e Extracting [> ] 393.2kB/37.17MB 07:45:00 c49e0ee60bfb Extracting [===========================> ] 59.05MB/107.3MB 07:45:00 10f05dd8b1db Verifying Checksum 07:45:00 10f05dd8b1db Download complete 07:45:00 55f2b468da67 Extracting [=========> ] 47.35MB/257.9MB 07:45:00 41dac8b43ba6 Downloading [==================================================>] 171B/171B 07:45:00 41dac8b43ba6 Verifying Checksum 07:45:00 41dac8b43ba6 Download complete 07:45:00 71a9f6a9ab4d Downloading [> ] 3.009kB/230.6kB 07:45:00 d223479d7367 Extracting [==================================================>] 6.742MB/6.742MB 07:45:00 eabd8714fec9 Downloading [=======> ] 54.07MB/375MB 07:45:00 71a9f6a9ab4d Downloading [==================================================>] 230.6kB/230.6kB 07:45:00 
71a9f6a9ab4d Verifying Checksum 07:45:00 71a9f6a9ab4d Download complete 07:45:00 f3a82e9f1761 Downloading [==> ] 2.293MB/44.41MB 07:45:00 6ac0e4adf315 Extracting [===========> ] 13.93MB/62.07MB 07:45:00 eca0188f477e Extracting [=====> ] 4.325MB/37.17MB 07:45:00 c49e0ee60bfb Extracting [============================> ] 61.83MB/107.3MB 07:45:00 46eab5b44a35 Pull complete 07:45:00 55f2b468da67 Extracting [==========> ] 54.59MB/257.9MB 07:45:00 c4d302cc468d Extracting [> ] 65.54kB/4.534MB 07:45:00 d223479d7367 Pull complete 07:45:00 da3ed5db7103 Downloading [> ] 539.6kB/127.4MB 07:45:00 eabd8714fec9 Downloading [========> ] 65.96MB/375MB 07:45:00 6ac0e4adf315 Extracting [=============> ] 16.15MB/62.07MB 07:45:00 f3a82e9f1761 Downloading [=====> ] 5.045MB/44.41MB 07:45:00 eca0188f477e Extracting [========> ] 6.685MB/37.17MB 07:45:00 55f2b468da67 Extracting [===========> ] 61.83MB/257.9MB 07:45:00 c49e0ee60bfb Extracting [==============================> ] 64.62MB/107.3MB 07:45:00 c4d302cc468d Extracting [===> ] 327.7kB/4.534MB 07:45:00 eabd8714fec9 Downloading [==========> ] 80.02MB/375MB 07:45:00 da3ed5db7103 Downloading [> ] 1.621MB/127.4MB 07:45:00 6ac0e4adf315 Extracting [===============> ] 18.94MB/62.07MB 07:45:00 f3a82e9f1761 Downloading [========> ] 7.339MB/44.41MB 07:45:00 eca0188f477e Extracting [=============> ] 10.22MB/37.17MB 07:45:00 55f2b468da67 Extracting [=============> ] 67.96MB/257.9MB 07:45:00 c49e0ee60bfb Extracting [===============================> ] 67.4MB/107.3MB 07:45:00 7ce9630189bb Extracting [> ] 327.7kB/31.04MB 07:45:00 c4d302cc468d Extracting [======================================> ] 3.473MB/4.534MB 07:45:00 eabd8714fec9 Downloading [============> ] 91.37MB/375MB 07:45:00 c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 07:45:00 da3ed5db7103 Downloading [=> ] 2.702MB/127.4MB 07:45:00 f3a82e9f1761 Downloading [===========> ] 10.09MB/44.41MB 07:45:00 55f2b468da67 Extracting [===============> ] 
77.43MB/257.9MB 07:45:00 eca0188f477e Extracting [=================> ] 12.98MB/37.17MB 07:45:00 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 07:45:00 c49e0ee60bfb Extracting [================================> ] 69.63MB/107.3MB 07:45:00 eabd8714fec9 Downloading [=============> ] 102.2MB/375MB 07:45:00 c4d302cc468d Pull complete 07:45:00 7ce9630189bb Extracting [=====> ] 3.277MB/31.04MB 07:45:00 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 07:45:00 da3ed5db7103 Downloading [=> ] 4.324MB/127.4MB 07:45:00 f3a82e9f1761 Downloading [==============> ] 13.3MB/44.41MB 07:45:00 eca0188f477e Extracting [=====================> ] 16.12MB/37.17MB 07:45:00 55f2b468da67 Extracting [================> ] 84.67MB/257.9MB 07:45:00 6ac0e4adf315 Extracting [=====================> ] 26.18MB/62.07MB 07:45:00 eabd8714fec9 Downloading [===============> ] 114.6MB/375MB 07:45:00 c49e0ee60bfb Extracting [=================================> ] 72.42MB/107.3MB 07:45:00 7ce9630189bb Extracting [=======> ] 4.588MB/31.04MB 07:45:00 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 07:45:00 da3ed5db7103 Downloading [==> ] 5.946MB/127.4MB 07:45:00 eca0188f477e Extracting [=========================> ] 19.27MB/37.17MB 07:45:00 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 07:45:00 f3a82e9f1761 Downloading [===================> ] 16.97MB/44.41MB 07:45:00 55f2b468da67 Extracting [=================> ] 90.8MB/257.9MB 07:45:00 6ac0e4adf315 Extracting [========================> ] 30.08MB/62.07MB 07:45:00 eabd8714fec9 Downloading [================> ] 124.4MB/375MB 07:45:00 c49e0ee60bfb Extracting [==================================> ] 74.65MB/107.3MB 07:45:00 01e0882c90d9 Pull complete 07:45:00 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 07:45:00 7ce9630189bb Extracting [===========> ] 6.881MB/31.04MB 07:45:00 eca0188f477e Extracting [===============================> ] 23.59MB/37.17MB 07:45:00 f3a82e9f1761 Downloading 
07:45:01 f3a82e9f1761 Verifying Checksum
07:45:01 f3a82e9f1761 Download complete
07:45:01 c955f6e31a04 Verifying Checksum
07:45:01 c955f6e31a04 Download complete
07:45:01 531ee2cf3c0c Pull complete
07:45:01 eca0188f477e Pull complete
07:45:01 6ac0e4adf315 Pull complete
07:45:02 da3ed5db7103 Verifying Checksum
07:45:02 da3ed5db7103 Download complete
07:45:02 19ede2622bd6 Download complete
07:45:02 81f92f6326a0 Verifying Checksum
07:45:02 81f92f6326a0 Download complete
07:45:02 774184111a51 Verifying Checksum
07:45:02 774184111a51 Download complete
07:45:02 ba3bfa42d232 Verifying Checksum
07:45:02 ba3bfa42d232 Download complete
07:45:02 43449fa9f0bf Verifying Checksum
07:45:02 43449fa9f0bf Download complete
07:45:02 8e7191d1a9d6 Verifying Checksum
07:45:02 8e7191d1a9d6 Download complete
07:45:03 25fd4437207e Verifying Checksum
07:45:03 25fd4437207e Download complete
07:45:03 eabd8714fec9 Verifying Checksum
07:45:03 eabd8714fec9 Download complete
07:45:04 c49e0ee60bfb Pull complete
07:45:04 ed54a7dee1d8 Pull complete
07:45:04 7ce9630189bb Pull complete
07:45:04 e444bcd4d577 Pull complete
07:45:06 19ede2622bd6 Pull complete
07:45:06 2d7f854c01cf Pull complete
07:45:06 f3b09c502777 Pull complete
07:45:06 12c5c803443f Pull complete
07:45:07 408012a7b118 Pull complete
07:45:07 e27c75a98748 Pull complete
07:45:08 44986281b8b9 Pull complete
07:45:08 8e665a4a2af9 Pull complete
07:45:09 81f92f6326a0 Pull complete
07:45:09 bf70c5107ab5 Pull complete
07:45:10 774184111a51 Pull complete
07:45:10 1ccde423731d Pull complete
07:45:12 ba3bfa42d232 Pull complete
07:45:12 7221d93db8a9 Pull complete
07:45:13 219d845251ba Pull complete
07:45:14 8e7191d1a9d6 Pull complete
07:45:14 55f2b468da67 Pull complete
07:45:17 384497dbce3b Pull complete
07:45:17 drools-pdp Pulled
07:45:18 82bfc142787e Pull complete
07:45:18 e73cb4a42719 Pull complete
07:45:18 43449fa9f0bf Pull complete
07:45:18 055b9255fa03 Pull complete
07:45:18 prometheus Pulled
07:45:18 a83b68436f09 Pull complete
07:45:18 46baca71a4ef Pull complete
07:45:18 25fd4437207e Pull complete
07:45:18 policy-db-migrator Pulled
07:45:18 b176d7edde70 Pull complete
07:45:18 787d6bee9571 Pull complete
07:45:18 grafana Pulled
07:45:18 eabd8714fec9 Extracting
[===========================================> ] 327.5MB/375MB 07:45:18 b0e0ef7895f4 Extracting [=========> ] 6.685MB/37.01MB 07:45:18 13ff0988aaea Pull complete 07:45:18 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 07:45:18 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 07:45:18 eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 07:45:18 b0e0ef7895f4 Extracting [======================> ] 16.91MB/37.01MB 07:45:18 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 07:45:18 4b82842ab819 Pull complete 07:45:18 7e568a0dc8fb Extracting [==================================================>] 184B/184B 07:45:18 7e568a0dc8fb Extracting [==================================================>] 184B/184B 07:45:18 b0e0ef7895f4 Extracting [===================================> ] 26.35MB/37.01MB 07:45:18 eabd8714fec9 Extracting [============================================> ] 333.1MB/375MB 07:45:18 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 07:45:18 7e568a0dc8fb Pull complete 07:45:18 b0e0ef7895f4 Pull complete 07:45:18 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 07:45:18 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 07:45:18 postgres Pulled 07:45:18 eabd8714fec9 Extracting [=============================================> ] 337.6MB/375MB 07:45:19 c0c90eeb8aca Pull complete 07:45:19 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 07:45:19 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 07:45:19 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 07:45:19 5cfb27c10ea5 Pull complete 07:45:19 40a5eed61bb0 Extracting 
[==================================================>] 98B/98B 07:45:19 40a5eed61bb0 Extracting [==================================================>] 98B/98B 07:45:19 40a5eed61bb0 Pull complete 07:45:19 e040ea11fa10 Extracting [==================================================>] 173B/173B 07:45:19 e040ea11fa10 Extracting [==================================================>] 173B/173B 07:45:19 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 07:45:19 e040ea11fa10 Pull complete 07:45:19 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 07:45:19 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 07:45:19 09d5a3f70313 Extracting [=====> ] 11.14MB/109.2MB 07:45:19 eabd8714fec9 Extracting [=============================================> ] 344.8MB/375MB 07:45:19 09d5a3f70313 Extracting [=========> ] 21.73MB/109.2MB 07:45:19 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 07:45:19 09d5a3f70313 Extracting [===============> ] 32.87MB/109.2MB 07:45:19 eabd8714fec9 Extracting [==============================================> ] 350.4MB/375MB 07:45:19 09d5a3f70313 Extracting [======================> ] 49.02MB/109.2MB 07:45:20 eabd8714fec9 Extracting [===============================================> ] 354.3MB/375MB 07:45:20 09d5a3f70313 Extracting [==============================> ] 66.29MB/109.2MB 07:45:20 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 07:45:20 09d5a3f70313 Extracting [=====================================> ] 82.44MB/109.2MB 07:45:20 eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB 07:45:20 09d5a3f70313 Extracting [============================================> ] 97.48MB/109.2MB 07:45:20 eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB 07:45:20 09d5a3f70313 Extracting [================================================> ] 
105.8MB/109.2MB 07:45:20 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 07:45:20 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 07:45:20 eabd8714fec9 Extracting [=================================================> ] 372.7MB/375MB 07:45:20 09d5a3f70313 Pull complete 07:45:20 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 07:45:20 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 07:45:20 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 07:45:20 356f5c2c843b Pull complete 07:45:20 kafka Pulled 07:45:20 eabd8714fec9 Pull complete 07:45:20 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 07:45:20 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 07:45:20 45fd2fec8a19 Pull complete 07:45:20 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 07:45:21 8f10199ed94b Extracting [========================> ] 4.227MB/8.768MB 07:45:21 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 07:45:21 8f10199ed94b Pull complete 07:45:21 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 07:45:21 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 07:45:21 f963a77d2726 Pull complete 07:45:21 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 07:45:21 f3a82e9f1761 Extracting [==================> ] 16.06MB/44.41MB 07:45:21 f3a82e9f1761 Extracting [===================================> ] 31.65MB/44.41MB 07:45:21 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 07:45:21 f3a82e9f1761 Pull complete 07:45:21 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 07:45:21 79161a3f5362 
Extracting [==================================================>] 4.656kB/4.656kB 07:45:21 79161a3f5362 Pull complete 07:45:21 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 07:45:21 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 07:45:21 9c266ba63f51 Pull complete 07:45:21 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 07:45:21 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 07:45:21 2e8a7df9c2ee Pull complete 07:45:21 10f05dd8b1db Extracting [==================================================>] 98B/98B 07:45:21 10f05dd8b1db Extracting [==================================================>] 98B/98B 07:45:22 10f05dd8b1db Pull complete 07:45:22 41dac8b43ba6 Extracting [==================================================>] 171B/171B 07:45:22 41dac8b43ba6 Extracting [==================================================>] 171B/171B 07:45:22 41dac8b43ba6 Pull complete 07:45:22 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 07:45:22 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 07:45:22 71a9f6a9ab4d Pull complete 07:45:22 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 07:45:22 da3ed5db7103 Extracting [======> ] 17.83MB/127.4MB 07:45:22 da3ed5db7103 Extracting [============> ] 32.87MB/127.4MB 07:45:22 da3ed5db7103 Extracting [===================> ] 50.69MB/127.4MB 07:45:22 da3ed5db7103 Extracting [===========================> ] 69.07MB/127.4MB 07:45:22 da3ed5db7103 Extracting [==================================> ] 88.01MB/127.4MB 07:45:22 da3ed5db7103 Extracting [=========================================> ] 106.4MB/127.4MB 07:45:23 da3ed5db7103 Extracting [==============================================> ] 119.2MB/127.4MB 07:45:23 da3ed5db7103 Extracting [================================================> ] 124.2MB/127.4MB 07:45:23 da3ed5db7103 Extracting 
[==================================================>] 127.4MB/127.4MB 07:45:23 da3ed5db7103 Pull complete 07:45:23 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 07:45:23 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 07:45:23 c955f6e31a04 Pull complete 07:45:23 zookeeper Pulled 07:45:23 Network compose_default Creating 07:45:23 Network compose_default Created 07:45:23 Container postgres Creating 07:45:23 Container prometheus Creating 07:45:23 Container zookeeper Creating 07:45:39 Container postgres Created 07:45:39 Container policy-db-migrator Creating 07:45:39 Container zookeeper Created 07:45:39 Container kafka Creating 07:45:39 Container prometheus Created 07:45:39 Container grafana Creating 07:45:39 Container grafana Created 07:45:39 Container policy-db-migrator Created 07:45:39 Container kafka Created 07:45:39 Container policy-api Creating 07:45:39 Container policy-api Created 07:45:39 Container policy-pap Creating 07:45:39 Container policy-pap Created 07:45:39 Container policy-drools-pdp Creating 07:45:39 Container policy-drools-pdp Created 07:45:39 Container zookeeper Starting 07:45:39 Container prometheus Starting 07:45:39 Container postgres Starting 07:45:40 Container prometheus Started 07:45:40 Container grafana Starting 07:45:41 Container grafana Started 07:45:41 Container postgres Started 07:45:41 Container policy-db-migrator Starting 07:45:42 Container policy-db-migrator Started 07:45:42 Container policy-api Starting 07:45:43 Container policy-api Started 07:45:43 Container zookeeper Started 07:45:43 Container kafka Starting 07:45:44 Container kafka Started 07:45:44 Container policy-pap Starting 07:45:45 Container policy-pap Started 07:45:45 Container policy-drools-pdp Starting 07:45:46 Container policy-drools-pdp Started 07:45:46 Prometheus server: http://localhost:30259 07:45:46 Grafana server: http://localhost:30269 07:45:46 Waiting 1 minute for drools-pdp 
to start...
07:46:46 Checking if REST port 30216 is open on localhost ...
07:46:46 IMAGE                                                    NAMES               STATUS
07:46:46 nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT  policy-drools-pdp   Up About a minute
07:46:46 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT     policy-pap          Up About a minute
07:46:46 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT     policy-api          Up About a minute
07:46:46 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9        kafka               Up About a minute
07:46:46 nexus3.onap.org:10001/grafana/grafana:latest             grafana             Up About a minute
07:46:46 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest   zookeeper           Up About a minute
07:46:46 nexus3.onap.org:10001/library/postgres:16.4              postgres            Up About a minute
07:46:46 nexus3.onap.org:10001/prom/prometheus:latest             prometheus          Up About a minute
07:46:46 Cloning into '/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/csit/resources/tests/models'...
07:46:47 Building robot framework docker image
07:47:24 sha256:b043114aebf5c2255928348057d6995ecb6be7bb634ff0624c736343a77baa35
07:47:28 top - 07:47:28 up 4 min, 0 users, load average: 2.73, 2.12, 0.93
07:47:28 Tasks: 229 total, 1 running, 152 sleeping, 0 stopped, 0 zombie
07:47:28 %Cpu(s): 14.0 us, 3.5 sy, 0.0 ni, 77.6 id, 4.8 wa, 0.0 hi, 0.1 si, 0.1 st
07:47:28
07:47:28        total  used  free  shared  buff/cache  available
07:47:28 Mem:   31G    2.5G  21G   27M     7.7G        28G
07:47:28 Swap:  1.0G   0B    1.0G
07:47:28
07:47:28 IMAGE                                                    NAMES               STATUS
07:47:28 nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT  policy-drools-pdp   Up About a minute
07:47:28 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT     policy-pap          Up About a minute
07:47:28 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT     policy-api          Up About a minute
07:47:28 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9        kafka               Up About a minute
07:47:28 nexus3.onap.org:10001/grafana/grafana:latest             grafana             Up About a minute
07:47:28 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest   zookeeper           Up About a minute
07:47:28 nexus3.onap.org:10001/library/postgres:16.4              postgres            Up About a minute
07:47:28 nexus3.onap.org:10001/prom/prometheus:latest             prometheus          Up About a minute
07:47:28
07:47:30 CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
07:47:30 fad76eb53b8d   policy-drools-pdp   0.58%   278MiB / 31.41GiB     0.86%   32.3kB / 41kB     0B / 8.19kB     54
07:47:30 2a09993efacb   policy-pap          5.90%   467.7MiB / 31.41GiB   1.45%   82.3kB / 125kB    0B / 139MB      68
07:47:30 1017de5fb2d9   policy-api          0.16%   402.8MiB / 31.41GiB   1.25%   1.14MB / 985kB    0B / 0B         58
07:47:30 47acda481bf4   kafka               5.31%   390.5MiB / 31.41GiB   1.21%   153kB / 137kB     0B / 590kB      83
07:47:30 c66dd0b20d09   grafana             0.19%   110.4MiB / 31.41GiB   0.34%   19.1MB / 197kB    0B / 30.5MB     19
07:47:30 22220c690cfa   zookeeper           0.42%   83.33MiB / 31.41GiB   0.26%   52.7kB / 45.3kB   225kB / 373kB   63
07:47:30 26a07882f0bf   postgres            0.02%   84.93MiB / 31.41GiB   0.26%   1.64MB / 1.71MB   0B / 158MB      26
07:47:30 aac296d49134   prometheus          0.00%   20.5MiB / 31.41GiB    0.06%   56.9kB / 2.49kB   4.1kB / 0B      13
07:47:30
07:47:30 Container policy-csit Creating
07:47:31 Container policy-csit Created
07:47:31 Attaching to policy-csit
07:47:31 policy-csit | Invoking the robot tests from: drools-pdp-test.robot
07:47:31 policy-csit | Run Robot test
07:47:31 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
07:47:31 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
07:47:31 policy-csit | -v POLICY_API_IP:policy-api:6969
07:47:31 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
07:47:31 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
07:47:31 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
07:47:31 policy-csit | -v APEX_IP:policy-apex-pdp:6969
07:47:31 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
07:47:31 policy-csit | -v KAFKA_IP:kafka:9092
07:47:31 policy-csit | -v PROMETHEUS_IP:prometheus:9090
07:47:31 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
07:47:31 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
07:47:31 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
07:47:31 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
07:47:31 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
07:47:31 policy-csit | -v TEMP_FOLDER:/tmp/distribution
07:47:31 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
07:47:31 policy-csit | -v TEST_ENV:docker
07:47:31 policy-csit | -v JAEGER_IP:jaeger:16686
07:47:31 policy-csit | Starting Robot test suites ...
07:47:32 policy-csit | ==============================================================================
07:47:32 policy-csit | Drools-Pdp-Test
07:47:32 policy-csit | ==============================================================================
07:47:32 policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
07:47:32 policy-csit | ------------------------------------------------------------------------------
07:47:32 policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
07:47:32 policy-csit | ------------------------------------------------------------------------------
07:47:32 policy-csit | Drools-Pdp-Test | PASS |
07:47:32 policy-csit | 2 tests, 2 passed, 0 failed
07:47:32 policy-csit | ==============================================================================
07:47:32 policy-csit | Output: /tmp/results/output.xml
07:47:32 policy-csit | Log: /tmp/results/log.html
07:47:32 policy-csit | Report: /tmp/results/report.html
07:47:32 policy-csit | RESULT: 0
07:47:32 policy-csit exited with code 0
07:47:32 IMAGE                                                    NAMES               STATUS
07:47:32 nexus3.onap.org:10001/onap/policy-drools:3.2.1-SNAPSHOT  policy-drools-pdp   Up About a minute
07:47:32 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT     policy-pap          Up About a minute
07:47:32 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT     policy-api          Up About a minute
07:47:32 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9        kafka               Up About a minute
07:47:32 nexus3.onap.org:10001/grafana/grafana:latest             grafana             Up About a minute
07:47:32 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest   zookeeper           Up About a minute
07:47:32 nexus3.onap.org:10001/library/postgres:16.4              postgres            Up About a minute
07:47:32 nexus3.onap.org:10001/prom/prometheus:latest             prometheus          Up About a minute
07:47:32 Shut down started!
07:47:34 Collecting logs from docker compose containers...
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779330431Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T07:45:41Z
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779631294Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779643684Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779648024Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779651424Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779654774Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779657954Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779660714Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779664244Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779669054Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779671864Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779674754Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779677654Z level=info msg=Target target=[all]
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779683164Z level=info msg="Path Home" path=/usr/share/grafana
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779686504Z level=info msg="Path Data" path=/var/lib/grafana
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779689474Z level=info msg="Path Logs" path=/var/log/grafana
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779693324Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779696084Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
07:47:34 grafana | logger=settings t=2025-06-16T07:45:41.779698974Z level=info msg="App mode production"
07:47:34 grafana | logger=featuremgmt t=2025-06-16T07:45:41.780059348Z level=info msg=FeatureToggles alertingApiServer=true reportingUseRawTimeRange=true alertRuleRestore=true ssoSettingsSAML=true dashboardSceneSolo=true logsContextDatasourceUi=true dataplaneFrontendFallback=true ssoSettingsApi=true unifiedRequestLog=true promQLScope=true kubernetesClientDashboardsFolders=true influxdbBackendMigration=true logsExploreTableVisualisation=true lokiQuerySplitting=true prometheusAzureOverrideAudience=true failWrongDSUID=true preinstallAutoUpdate=true newFiltersUI=true alertingRuleVersionHistoryRestore=true panelMonitoring=true cloudWatchRoundUpEndTime=true pinNavItems=true dashboardSceneForViewers=true pluginsDetailsRightPanel=true azureMonitorPrometheusExemplars=true transformationsRedesign=true correlations=true nestedFolders=true cloudWatchCrossAccountQuerying=true lokiStructuredMetadata=true newDashboardSharingComponent=true groupToNestedTableTransformation=true logsInfiniteScrolling=true useSessionStorageForRedirection=true angularDeprecationUI=true alertingUIOptimizeReducer=true alertingQueryAndExpressionsStepMode=true recordedQueriesMulti=true publicDashboardsScene=true alertingRuleRecoverDeleted=true alertingNotificationsStepMode=true lokiQueryHints=true dashboardScene=true alertingSimplifiedRouting=true alertingRulePermanentlyDelete=true cloudWatchNewLabelParsing=true prometheusUsesCombobox=true awsAsyncQueryCaching=true lokiLabelNamesQueryApi=true tlsMemcached=true annotationPermissionUpdate=true alertingInsights=true logRowsPopoverMenu=true kubernetesPlaylists=true newPDFRendering=true dashgpt=true formatString=true azureMonitorEnableUserAuth=true externalCorePlugins=true recoveryThreshold=true unifiedStorageSearchPermissionFiltering=true grafanaconThemes=true logsPanelControls=true onPremToCloudMigrations=true addFieldFromCalculationStatFunctions=true
07:47:34 grafana | logger=sqlstore t=2025-06-16T07:45:41.780117359Z level=info msg="Connecting to DB" dbtype=sqlite3
07:47:34 grafana | logger=sqlstore t=2025-06-16T07:45:41.780152869Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.781712505Z level=info msg="Locking database"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.781727205Z level=info msg="Starting DB migrations"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.782360221Z level=info msg="Executing migration" id="create migration_log table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.783202129Z level=info msg="Migration successfully executed" id="create migration_log table" duration=842.778µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.789634714Z level=info msg="Executing migration" id="create user table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.791126029Z level=info msg="Migration successfully executed" id="create user table" duration=1.493045ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.795315431Z level=info msg="Executing migration" id="add unique index user.login"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.796242971Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=926.92µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.80211601Z level=info msg="Executing migration" id="add unique index user.email"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.802972798Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=857.368µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.807505824Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.80814915Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=642.896µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.815426243Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.816449043Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.01814ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.82102923Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.824763257Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.731397ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.837398664Z level=info msg="Executing migration" id="create user table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.838552736Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.157582ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.843149152Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.84392478Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=775.568µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.848568946Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.849314754Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=745.358µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.855824669Z level=info msg="Executing migration" id="copy data_source v1 to v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.856459976Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=634.917µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.861442376Z level=info msg="Executing migration" id="Drop old table user_v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.862307574Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=865.908µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.868584337Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.869697489Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.112662ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.878049623Z level=info msg="Executing migration" id="Update user table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.878084953Z level=info msg="Migration successfully executed" id="Update user table charset" duration=36.61µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.882548788Z level=info msg="Executing migration" id="Add last_seen_at column to user"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.884426737Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.877249ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.888815021Z level=info msg="Executing migration" id="Add missing user data"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.889174364Z level=info msg="Migration successfully executed" id="Add missing user data" duration=359.363µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.893450287Z level=info msg="Executing migration" id="Add is_disabled column to user"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.895382407Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.93063ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.902335707Z level=info msg="Executing migration" id="Add index user.login/user.email"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.903091844Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=755.267µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.906308927Z level=info msg="Executing migration" id="Add is_service_account column to user"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.908025644Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.714537ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.912043844Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.922740182Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.695758ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.926436829Z level=info msg="Executing migration" id="Add uid column to user"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.927741662Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.301083ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.935282138Z level=info msg="Executing migration" id="Update uid column values for users"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.935589221Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=307.483µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.940876044Z level=info msg="Executing migration" id="Add unique index user_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.942151957Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.275163ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.946576951Z level=info msg="Executing migration" id="Add is_provisioned column to user"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.947802694Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.224813ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.952034306Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.952458491Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=423.614µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.959469801Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.960076057Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=605.546µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.96439193Z level=info msg="Executing migration" id="update login and email fields to lowercase"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.965241139Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=847.589µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.970974977Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.971620883Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=644.796µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.978354161Z level=info msg="Executing migration" id="create temp user table v1-7"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.97925916Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=904.319µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.983543353Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.984381741Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=838.158µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.988015378Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.988926637Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=906.539µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.993595964Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:41.994916417Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.318253ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.002232971Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.003622145Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.389704ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.008110249Z level=info msg="Executing migration" id="Update temp_user table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.00818539Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=77.931µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.012461532Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.01325757Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=795.938µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.019760854Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.021209788Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.446474ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.025903984Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.026648262Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=744.278µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.029768032Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.03051991Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=754.758µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.048137243Z level=info msg="Executing
migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.053172263Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.03494ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.057114132Z level=info msg="Executing migration" id="create temp_user v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.058632206Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.517294ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.063531345Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.064303332Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=771.727µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.070767006Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.072025738Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.258122ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.077117449Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.078414682Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.296602ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.082500442Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.083265229Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=764.207µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.089650032Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.09044469Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=793.438µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.095072725Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.096010525Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=936.23µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.100786662Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.101243006Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=456.044µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.104553809Z level=info msg="Executing migration" id="create star table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.105299266Z level=info msg="Migration successfully executed" id="create star table" duration=746.057µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.111865251Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.113084733Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.219022ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.116752339Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.118994221Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.240382ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.123893829Z level=info msg="Executing migration" id="Add column org_id in star" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.125417184Z 
level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.521865ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.129871148Z level=info msg="Executing migration" id="Add column updated in star" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.131319202Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.447304ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.137726645Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.138540184Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=813.249µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.14322427Z level=info msg="Executing migration" id="create org table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.144527742Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.302432ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.149024547Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.15034144Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.316093ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.154903355Z level=info msg="Executing migration" id="create org_user table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.156114206Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.210011ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.16257838Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.163419688Z level=info msg="Migration successfully executed" id="create index 
IDX_org_user_org_id - v1" duration=841.038µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.167993444Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.169433838Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.440024ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.173166305Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.174514308Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.350963ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.178027972Z level=info msg="Executing migration" id="Update org table charset" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.178053453Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.261µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.183211603Z level=info msg="Executing migration" id="Update org_user table charset" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.183237844Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=26.881µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.186836869Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.187252093Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=410.514µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.190689217Z level=info msg="Executing migration" id="create dashboard table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.19200346Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.313313ms 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.196322172Z level=info msg="Executing migration" id="add index dashboard.account_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.197321142Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=998.31µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.203985758Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.205330861Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.344313ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.210075178Z level=info msg="Executing migration" id="create dashboard_tag table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.211010467Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=935.529µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.215222589Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.216087057Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=864.308µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.222908264Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.223648062Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=739.628µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.228284307Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.237169835Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.886108ms 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.258904899Z level=info msg="Executing migration" id="create dashboard v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.260437054Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.533565ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.26917967Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.270763256Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.581636ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.275548993Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.277066108Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.515795ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.281588252Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.281976276Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=387.444µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.427339978Z level=info msg="Executing migration" id="drop table dashboard_v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.428960484Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.622796ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.50876653Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.5088268Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=64.471µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.549452841Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.552799973Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.348563ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.556770463Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.559827432Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.05695ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.563036914Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.565016164Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.97851ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.570043473Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.571081803Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.03718ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.575009832Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.577000012Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.99032ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.581774619Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.5828717Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.097291ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.58698951Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.587894079Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=904.019µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.592191451Z level=info msg="Executing migration" id="Update dashboard table charset" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.592221971Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=31.17µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.596196381Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.596223021Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=27.57µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.599259871Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.602610234Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.349283ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.606951957Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.609166979Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.216321ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.613785044Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.616865004Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.0793ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.620947325Z level=info msg="Executing migration" id="Add column uid in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.622865443Z level=info msg="Migration successfully executed" id="Add 
column uid in dashboard" duration=1.917368ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.625776612Z level=info msg="Executing migration" id="Update uid column values in dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.625990254Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=210.512µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.628199456Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.629093045Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=896.059µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.634809171Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.635486828Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=675.687µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.638501727Z level=info msg="Executing migration" id="Update dashboard title length" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.638542538Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=39.631µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.645949431Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.647709028Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.759357ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.651206143Z level=info msg="Executing migration" id="create dashboard_provisioning" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.65190442Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=698.457µs 
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.658328843Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.663966828Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.637325ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.66717035Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.667953777Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=783.227µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.671329091Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.67222902Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=900.009µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.677904055Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.678992906Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.088021ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.682474251Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.683034486Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=560.295µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.685950715Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.686943154Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=993.839µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.716656067Z level=info msg="Executing migration" id="Add check_sum column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.721063391Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.407873ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.725945999Z level=info msg="Executing migration" id="Add index for dashboard_title" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.727194091Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.247822ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.73418718Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.734447942Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=262.532µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.740028337Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.740236679Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=208.982µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.743398901Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.744222499Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=833.089µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.747593622Z level=info msg="Executing migration" id="Add isPublic for dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.749796053Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" 
duration=2.202191ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.753128776Z level=info msg="Executing migration" id="Add deleted for dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.75656443Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.425884ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.76260491Z level=info msg="Executing migration" id="Add index for deleted" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.763379387Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=774.587µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.767555239Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.771273395Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.716456ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.77483097Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.778500136Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=3.668576ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.784103961Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.784511695Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=407.694µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.787405494Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.791012459Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.605195ms 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.794218191Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.795527474Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.308563ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.801958307Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.802368561Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=410.134µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.804702104Z level=info msg="Executing migration" id="create data_source table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.805519562Z level=info msg="Migration successfully executed" id="create data_source table" duration=817.618µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.808484952Z level=info msg="Executing migration" id="add index data_source.account_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.809794464Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.309092ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.818094266Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.819574711Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.482105ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.822922844Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.824039755Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.116481ms 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:42.827328687Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.828022074Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=693.207µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.832378737Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.841558607Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.17804ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.845464766Z level=info msg="Executing migration" id="create data_source table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.847048871Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.596676ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.853723557Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.855092351Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.368754ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.859013159Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.860399773Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.386264ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.86621186Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.867762355Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.545675ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.871568733Z level=info msg="Executing migration" id="Add column with_credentials"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.874097398Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.527805ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.878917295Z level=info msg="Executing migration" id="Add secure json data column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.88144792Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.529925ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.88446513Z level=info msg="Executing migration" id="Update data_source table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.884632021Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=163.251µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.88755107Z level=info msg="Executing migration" id="Update initial version to 1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.887860483Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=308.863µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.891062495Z level=info msg="Executing migration" id="Add read_only data column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.89366733Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.603845ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.898230215Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.898532658Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=302.003µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.903097883Z level=info msg="Executing migration" id="Update json_data with nulls"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.903547658Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=449.085µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.908222244Z level=info msg="Executing migration" id="Add uid column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.912348425Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.12286ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.920834438Z level=info msg="Executing migration" id="Update uid value"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.921108861Z level=info msg="Migration successfully executed" id="Update uid value" duration=273.843µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.927011039Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.928536594Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.525165ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.99215928Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.993702916Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.543166ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:42.997524193Z level=info msg="Executing migration" id="Add is_prunable column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.001002818Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.476525ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.005422921Z level=info msg="Executing migration" id="Add api_version column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.007856436Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.432715ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.012138488Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.012156739Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=18.901µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.01634401Z level=info msg="Executing migration" id="create api_key table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.01731717Z level=info msg="Migration successfully executed" id="create api_key table" duration=971.52µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.021630273Z level=info msg="Executing migration" id="add index api_key.account_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.022851575Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.220492ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.026480021Z level=info msg="Executing migration" id="add index api_key.key"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.027218018Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=740.807µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.032704422Z level=info msg="Executing migration" id="add index api_key.account_id_name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.033517691Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=812.769µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.037133917Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.038281778Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.150391ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.041870274Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.042988075Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.118601ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.047631521Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.048416919Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=785.128µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.051779523Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.05855126Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.771057ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.063112125Z level=info msg="Executing migration" id="create api_key table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.06362182Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=509.475µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.066774111Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.067345557Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=571.156µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.07166817Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.072998044Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.328543ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.077485118Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.07869752Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.211122ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.082493498Z level=info msg="Executing migration" id="copy api_key v1 to v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.082976953Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=486.585µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.086328296Z level=info msg="Executing migration" id="Drop old table api_key_v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.086860591Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=532.255µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.092192024Z level=info msg="Executing migration" id="Update api_key table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.092216925Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.001µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.09572441Z level=info msg="Executing migration" id="Add expires to api_key table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.099848401Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.123341ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.103885961Z level=info msg="Executing migration" id="Add service account foreign key"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.106455076Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.568265ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.110388545Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.110544617Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=155.462µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.125791689Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.129826649Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.03556ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.134395874Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.13703923Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.642626ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.140434314Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.141137531Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=703.897µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.146606346Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.147432244Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=824.028µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.175685925Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.177040258Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.357383ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.180537513Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.181993108Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.454545ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.185488902Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.186420792Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=932.49µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.192072228Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.194554533Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=2.481345ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.199960486Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.199998887Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=40.601µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.203830345Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.203927036Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=41.56µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.20740021Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.210445081Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.045131ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.216090117Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.218993736Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.905209ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.222240598Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.222256478Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=16.82µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.224943825Z level=info msg="Executing migration" id="create quota table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.225753333Z level=info msg="Migration successfully executed" id="create quota table v1" duration=804.448µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.230985515Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.232678332Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.692477ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.236144236Z level=info msg="Executing migration" id="Update quota table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.236345398Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=202.092µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.240409219Z level=info msg="Executing migration" id="create plugin_setting table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.241767872Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.358383ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.24659213Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.24752242Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=927.07µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.250764112Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.253848602Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.08413ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.257441398Z level=info msg="Executing migration" id="Update plugin_setting table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.257540399Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=98.161µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.26261331Z level=info msg="Executing migration" id="update NULL org_id to 1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.263164865Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=549.655µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.266805641Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.283029283Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=16.199712ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.286520998Z level=info msg="Executing migration" id="create session table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.287321086Z level=info msg="Migration successfully executed" id="create session table" duration=800.567µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.292384636Z level=info msg="Executing migration" id="Drop old table playlist table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.292463197Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=78.861µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.295547137Z level=info msg="Executing migration" id="Drop old table playlist_item table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.295618738Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=71.891µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.300134263Z level=info msg="Executing migration" id="create playlist table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.300791279Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=657.096µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.306092532Z level=info msg="Executing migration" id="create playlist item table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.306697878Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=604.566µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.311533256Z level=info msg="Executing migration" id="Update playlist table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.311729878Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=196.222µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.318382095Z level=info msg="Executing migration" id="Update playlist_item table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.318583947Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=200.823µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.364725426Z level=info msg="Executing migration" id="Add playlist column created_at"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.369973638Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.247252ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.469861791Z level=info msg="Executing migration" id="Add playlist column updated_at"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.47573674Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=5.876649ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.481312555Z level=info msg="Executing migration" id="drop preferences table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.481516327Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=203.172µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.48476191Z level=info msg="Executing migration" id="drop preferences table v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.484985422Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=223.562µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.488344295Z level=info msg="Executing migration" id="create preferences table v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.489548817Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.202692ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.500944241Z level=info msg="Executing migration" id="Update preferences table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.501050352Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=83.48µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.504346984Z level=info msg="Executing migration" id="Add column team_id in preferences"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.507708448Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.360564ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.510649617Z level=info msg="Executing migration" id="Update team_id column values in preferences"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.51090512Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=255.563µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.515408145Z level=info msg="Executing migration" id="Add column week_start in preferences"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.518767038Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.358213ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.523579636Z level=info msg="Executing migration" id="Add column preferences.json_data"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.528527245Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.946479ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.53403441Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.53407841Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=46.6µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.536980749Z level=info msg="Executing migration" id="Add preferences index org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.538516414Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.534945ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.542243221Z level=info msg="Executing migration" id="Add preferences index user_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.543874258Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.631827ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.568707894Z level=info msg="Executing migration" id="create alert table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.569965757Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.256593ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.573217069Z level=info msg="Executing migration" id="add index alert org_id & id "
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.574512112Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.294683ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.577593583Z level=info msg="Executing migration" id="add index alert state"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.578537202Z level=info msg="Migration successfully executed" id="add index alert state" duration=928.159µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.583052527Z level=info msg="Executing migration" id="add index alert dashboard_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.584049417Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=996.46µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.587654133Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.588462591Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=807.788µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.591639323Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.600627672Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=8.984609ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.604310189Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.605605452Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.293643ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.608831834Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.620146416Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.312152ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.628051505Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.628745412Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=693.877µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.634037105Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.635186236Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.150821ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.669842051Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.670455967Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=616.516µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.675185714Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.676085513Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=899.859µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.681524177Z level=info msg="Executing migration" id="create alert_notification table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.682772519Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.248702ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.688021772Z level=info msg="Executing migration" id="Add column is_default"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.693607007Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.585735ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.698743638Z level=info msg="Executing migration" id="Add column frequency"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.702703258Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.93962ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.706231593Z level=info msg="Executing migration" id="Add column send_reminder"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.710269463Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.03692ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.714778948Z level=info msg="Executing migration" id="Add column disable_resolve_message"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.719660576Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.880018ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.724561975Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.725666166Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.103831ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.729451474Z level=info msg="Executing migration" id="Update alert table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.729479004Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.49µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.733803237Z level=info msg="Executing migration" id="Update alert_notification table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.733836737Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=34.28µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.736901108Z level=info msg="Executing migration" id="create notification_journal table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.737696296Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=795.048µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.743931588Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.744876197Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=944.039µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.749732436Z level=info msg="Executing migration" id="drop alert_notification_journal"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.750631044Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=898.579µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.755802436Z level=info msg="Executing migration" id="create alert_notification_state table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.756843616Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.04062ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.760504633Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.762061368Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.556935ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.775865045Z level=info msg="Executing migration" id="Add for to alert table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.779942686Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.076991ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.784641953Z level=info msg="Executing migration" id="Add column uid in alert_notification"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.791843515Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=7.200291ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.796578641Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.796732853Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=154.292µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.801087817Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.801803744Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=714.927µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.80850281Z level=info msg="Executing migration" id="Remove unique index org_id_name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.809389499Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=886.509µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.840907792Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.847286156Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.376754ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.851033293Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.851049933Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=20.96µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.855447417Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.856347106Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=899.539µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.86090127Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.861804699Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=905.609µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.865058612Z level=info msg="Executing migration" id="Drop old annotation table v4"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.865167013Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=108.481µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.871505616Z level=info msg="Executing migration" id="create annotation table v5"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.872485376Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=979.5µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.876870599Z level=info msg="Executing migration" id="add index annotation 0 v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.877828629Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=957.78µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.881348284Z level=info msg="Executing migration" id="add index annotation 1 v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.882283753Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=931.839µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.886939779Z level=info msg="Executing migration" id="add index annotation 2 v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.887869308Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=929.239µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.891203462Z level=info msg="Executing migration" id="add index annotation 3 v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.892226552Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.02249ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.896178781Z level=info msg="Executing migration" id="add index annotation 4 v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.897122811Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=943.66µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.901299052Z level=info msg="Executing migration" id="Update annotation table charset"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.901325893Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.491µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.904414373Z level=info msg="Executing migration" id="Add column region_id to annotation table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.908480674Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.065111ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.913595945Z level=info msg="Executing migration" id="Drop category_id index"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.914465963Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=872.388µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.918844887Z level=info msg="Executing migration" id="Add column tags to annotation table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.922725445Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.880318ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.926069779Z level=info msg="Executing migration" id="Create annotation_tag table v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.926760205Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=686.706µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.930089039Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.931036398Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=946.879µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.935832186Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.936726895Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=894.009µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.939884616Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.955480881Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.596375ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.958742183Z level=info msg="Executing migration" id="Create annotation_tag table v3"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.959239998Z level=info msg="Migration successfully executed" id="Create annotation_tag table
v3" duration=497.425µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.965088757Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:43.965766903Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=678.546µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.016523529Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.017015974Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=493.335µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.020089865Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.021044004Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=953.579µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.025865783Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.026202566Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=337.553µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.029799772Z level=info msg="Executing migration" id="Add created time to annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.033976693Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.176261ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.037491019Z level=info msg="Executing migration" id="Add updated time to annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.041836092Z 
level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.343223ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.047027264Z level=info msg="Executing migration" id="Add index for created in annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.048488389Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.460255ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.051790232Z level=info msg="Executing migration" id="Add index for updated in annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.053016924Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.225192ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.057849682Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.058090015Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=238.863µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.063538799Z level=info msg="Executing migration" id="Add epoch_end column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.071223056Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.683347ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.075208186Z level=info msg="Executing migration" id="Add index for epoch_end" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.075893723Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=685.067µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.079262967Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.079392248Z level=info msg="Migration successfully 
executed" id="Make epoch_end the same as epoch" duration=129.101µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.082630931Z level=info msg="Executing migration" id="Move region to single row" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.083059125Z level=info msg="Migration successfully executed" id="Move region to single row" duration=427.704µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.088135996Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.089186326Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.0489ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.095069525Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.096728972Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.655207ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.101405859Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.102979434Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.573125ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.108440719Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.109325278Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=883.999µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.114097566Z level=info msg="Executing migration" id="Remove index 
org_id_epoch_epoch_end from annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.114981614Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=883.698µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.19950967Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.201543851Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=2.034201ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.28541572Z level=info msg="Executing migration" id="Increase tags column to length 4096" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.285493091Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=77.701µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.348030847Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.348097807Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=68.53µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.409268139Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.40929777Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=30.931µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.450839836Z level=info msg="Executing migration" id="create test_data table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.452026948Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.187122ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.469480842Z level=info msg="Executing migration" 
id="create dashboard_version table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.470647024Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.165682ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.47622786Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.477423432Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.197822ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.482311851Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.48327125Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=959.339µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.490670825Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.490903867Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=236.163µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.494791635Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.495183729Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=391.694µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.499184069Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.499204659Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=37.57µs 07:47:34 
grafana | logger=migrator t=2025-06-16T07:45:44.502489442Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.508364821Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.874899ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.512850796Z level=info msg="Executing migration" id="create team table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.513735695Z level=info msg="Migration successfully executed" id="create team table" duration=885.049µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.521138139Z level=info msg="Executing migration" id="add index team.org_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.523108609Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.96933ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.529684625Z level=info msg="Executing migration" id="add unique index team_org_id_name" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.530621314Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=936.659µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.533856586Z level=info msg="Executing migration" id="Add column uid in team" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.538605144Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.748218ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.542877837Z level=info msg="Executing migration" id="Update uid column values in team" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.543054489Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=178.632µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.546075489Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 07:47:34 grafana | 
logger=migrator t=2025-06-16T07:45:44.546956658Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=880.899µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.5542304Z level=info msg="Executing migration" id="Add column external_uid in team" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.559461763Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.233773ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.562482293Z level=info msg="Executing migration" id="Add column is_provisioned in team" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.565739236Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=3.256863ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.57018035Z level=info msg="Executing migration" id="create team member table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.571008899Z level=info msg="Migration successfully executed" id="create team member table" duration=828.018µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.574810837Z level=info msg="Executing migration" id="add index team_member.org_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.575849937Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.03857ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.579011859Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.580019939Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.00789ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.586746976Z level=info msg="Executing migration" id="add index team_member.team_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.588781916Z level=info msg="Migration successfully 
executed" id="add index team_member.team_id" duration=2.03602ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.595948128Z level=info msg="Executing migration" id="Add column email to team table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.600964168Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.01857ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.606492753Z level=info msg="Executing migration" id="Add column external to team_member table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.609977908Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.482205ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.613057299Z level=info msg="Executing migration" id="Add column permission to team_member table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.617799607Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.742178ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.623226121Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.6241565Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=930.059µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.630865107Z level=info msg="Executing migration" id="create dashboard acl table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.632065499Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.202992ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.635760446Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.63714329Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" 
duration=1.365774ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.664380353Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.665803017Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.422714ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.670388363Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.671497454Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.109201ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.674540595Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.675422074Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=881.149µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.680726417Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.681638676Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=911.509µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.688420324Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.689371153Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=950.379µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.692656676Z level=info msg="Executing migration" id="add index dashboard_permission" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.693543345Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=886.279µs 
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.699192421Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.699669706Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=477.195µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.702825157Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.70305512Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=229.993µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.708565175Z level=info msg="Executing migration" id="create tag table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.709499314Z level=info msg="Migration successfully executed" id="create tag table" duration=933.779µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.713293022Z level=info msg="Executing migration" id="add index tag.key_value" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.714274342Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=980.75µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.718451734Z level=info msg="Executing migration" id="create login attempt table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.719387763Z level=info msg="Migration successfully executed" id="create login attempt table" duration=935.979µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.726618256Z level=info msg="Executing migration" id="add index login_attempt.username" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.727924179Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.307283ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.732210082Z level=info msg="Executing 
migration" id="drop index IDX_login_attempt_username - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.73307926Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=869.028µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.736127931Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.750643436Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.514275ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.755470265Z level=info msg="Executing migration" id="create login_attempt v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.756069381Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=598.736µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.760410134Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.761235182Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=824.958µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.764448725Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.764757188Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=308.543µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.769637417Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.770685357Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.046371ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.775201612Z level=info msg="Executing migration" id="create user auth 
table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.776766308Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.564676ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.783369554Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.784248083Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=878.419µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.788840798Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.788858109Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=17.69µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.792244032Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.797298833Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.053951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.800832818Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.80695047Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.117372ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.812068861Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.817292133Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.222822ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.820561086Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:44.82599554Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.433694ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.82997721Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.832121592Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=2.144662ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.837725728Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.843602047Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.880539ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.847168732Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.852385265Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.216013ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.855791619Z level=info msg="Executing migration" id="create server_lock table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.856798799Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.00678ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.88392912Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.886056812Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.127512ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.891512876Z level=info msg="Executing migration" id="create user auth token table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.892531496Z level=info msg="Migration successfully executed" id="create user 
auth token table" duration=1.01903ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.896036471Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.897031511Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=994.8µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.902535626Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.903489946Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=953.99µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.907173213Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.908160423Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=986.82µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.911774619Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.918417795Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.642606ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.923623407Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.924625978Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.002201ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.928320885Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.937044822Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.722237ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.94086683Z level=info msg="Executing migration" id="create cache_data table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.941543477Z level=info msg="Migration successfully executed" id="create cache_data table" duration=676.387µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.946686008Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.947638648Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=949.81µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.950903351Z level=info msg="Executing migration" id="create short_url table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.951772689Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=865.908µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.955173743Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.956138723Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=964.66µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.959760379Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.959773439Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=13.6µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.967621628Z level=info msg="Executing migration" id="delete alert_definition table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.967915501Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=292.523µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.972131913Z level=info msg="Executing migration" id="recreate alert_definition table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.973272395Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.140862ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.976717099Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.977692389Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=975.14µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.982985832Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.983970141Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=983.979µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.987220674Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.987232604Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=12.59µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.990585668Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.991978311Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.392063ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.997541937Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:44.999401366Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.859109ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.003452246Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.005220115Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.768149ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.008939643Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.009940413Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.00085ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.014735582Z level=info msg="Executing migration" id="Add column paused in alert_definition"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.02040744Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.671797ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.023791404Z level=info msg="Executing migration" id="drop alert_definition table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.025058707Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.252792ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.030199279Z level=info msg="Executing migration" id="delete alert_definition_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.03032978Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=131.021µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.033818246Z level=info msg="Executing migration" id="recreate alert_definition_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.035328891Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.510225ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.038995068Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.040665495Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.669707ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.045946629Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.046935709Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=988.3µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.050500435Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.050549506Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=49.591µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.05391186Z level=info msg="Executing migration" id="drop alert_definition_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.054955711Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.042991ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.059925621Z level=info msg="Executing migration" id="create alert_instance table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.060888741Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=962.58µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.064635809Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.065625159Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=986.13µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.07354883Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.07454104Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=994.35µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.095790356Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.102655896Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.86546ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.105698227Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.106568066Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=872.319µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.111115522Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.111986071Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=870.369µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.115273835Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.141499361Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.225256ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.144808145Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.172971772Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=28.163307ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.178431507Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.181033584Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=2.601277ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.184926754Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.186590601Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.658576ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.189912134Z level=info msg="Executing migration" id="add current_reason column related to current_state"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.19440998Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.497276ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.199598313Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.205793626Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.194523ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.209410353Z level=info msg="Executing migration" id="create alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.210727406Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.314643ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.214501215Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.216873609Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.372664ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.22189289Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.22291841Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.02521ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.226193654Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.227227204Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.033131ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.230380896Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.230394206Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=14.03µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.235537378Z level=info msg="Executing migration" id="add column for to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.245718192Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=10.181444ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.248728073Z level=info msg="Executing migration" id="add column annotations to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.253164348Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.435355ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.25632554Z level=info msg="Executing migration" id="add column labels to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.265330112Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.004342ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.270319033Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.272630896Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=2.311503ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.276576856Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.28086236Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=4.283794ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.284725789Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.292599929Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.87408ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.305233018Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.321481563Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=16.247495ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.326483394Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.327548365Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.065351ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.331177952Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.337929551Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.751179ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.344984063Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.350462258Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.476015ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.354614261Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.354663801Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=52.541µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.358291888Z level=info msg="Executing migration" id="create alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.359159307Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=867.499µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.363510371Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.36438999Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=878.929µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.368186019Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.369996637Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.810248ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.374374622Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.374398372Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=25.4µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.382663246Z level=info msg="Executing migration" id="add column for to alert_rule_version"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.391015361Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.350905ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.394645118Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.404374437Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.729319ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.408220276Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.417355269Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=9.134413ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.420906345Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.429602304Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=8.694979ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.434969348Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.443093941Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.118583ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.448163582Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.448184863Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=22.361µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.454502487Z level=info msg="Executing migration" id=create_alert_configuration_table
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.455550898Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.048821ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.463162655Z level=info msg="Executing migration" id="Add column default in alert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.472750763Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.588478ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.476305799Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.476342199Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=38.1µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.484161279Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.491306051Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.145772ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.498232012Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.498938939Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=706.917µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.514648369Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.527255938Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=12.607759ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.530537931Z level=info msg="Executing migration" id=create_ngalert_configuration_table
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.531212838Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=674.537µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.536801035Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.537819525Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.01749ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.54325081Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.548903858Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.654818ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.554592836Z level=info msg="Executing migration" id="create provenance_type table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.555208522Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=615.276µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.558756158Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.560407575Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.650717ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.565296015Z level=info msg="Executing migration" id="create alert_image table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.566578688Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.282273ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.570304656Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.571272866Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=968.21µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.574972153Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.575001573Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=30.97µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.579781832Z level=info msg="Executing migration" id=create_alert_configuration_history_table
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.581268677Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.486815ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.585358519Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.587428879Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.07028ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.592147198Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.592598033Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.595921116Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.596528022Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=423.095µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.600054079Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.601649335Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.594516ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.605310832Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.612849039Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.537397ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.617655388Z level=info msg="Executing migration" id="create library_element table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.618358135Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=702.577µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.621964522Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.622738979Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=773.737µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.627582219Z level=info msg="Executing migration" id="create library_element_connection table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.628982033Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.399814ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.633801372Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.634833412Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.03204ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.639006045Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.640080856Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.074811ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.643565771Z level=info msg="Executing migration" id="increase max description length to 2048"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.643595382Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.331µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.651124918Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.651152149Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=32.241µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.656515693Z level=info msg="Executing migration" id="add library_element folder uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.668286143Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=11.77145ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.67194871Z level=info msg="Executing migration" id="populate library_element folder_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.672378154Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=429.444µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.677091813Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.678965972Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.873159ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.682651329Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.683235865Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=584.236µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.686857952Z level=info msg="Executing migration" id="create data_keys table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.688006234Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.147822ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.69163893Z level=info msg="Executing migration" id="create secrets table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.692688741Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.049811ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.697031086Z level=info msg="Executing migration" id="rename data_keys name column to id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.734215534Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=37.184228ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.738838731Z level=info msg="Executing migration" id="add name column into data_keys"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.744244886Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.405845ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.747926044Z level=info msg="Executing migration" id="copy data_keys id column values into name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.748188456Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=261.542µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.752960555Z level=info msg="Executing migration" id="rename data_keys name column to label"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.788386455Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.42866ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.791927571Z level=info msg="Executing migration" id="rename data_keys id column back to name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.822081178Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.152617ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.825760396Z level=info msg="Executing migration" id="create kv_store table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.826647765Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=886.889µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.831481544Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.833351653Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.869809ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.837871789Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.838446335Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=574.306µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.843989601Z level=info msg="Executing migration" id="create permission table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.844987641Z level=info msg="Migration successfully executed" id="create permission table" duration=998.04µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.850009602Z level=info msg="Executing migration" id="add unique index permission.role_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.851881911Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.872309ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.856154975Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.858048894Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.894089ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.861868633Z level=info msg="Executing migration" id="create role table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.862867233Z level=info msg="Migration successfully executed" id="create role table" duration=998.6µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.86748481Z level=info msg="Executing migration" id="add column display_name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.874978667Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.493857ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.880169489Z level=info msg="Executing migration" id="add column group_name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.885715576Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.546087ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.890452204Z level=info msg="Executing migration" id="add index role.org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.891688697Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.236483ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.896722478Z level=info msg="Executing migration" id="add unique index role_org_id_name"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.897861249Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.139201ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.901430806Z level=info msg="Executing migration" id="add index role_org_id_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.902643778Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.213482ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.907260035Z level=info msg="Executing migration" id="create team role table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.911552389Z level=info msg="Migration successfully executed" id="create team role table" duration=4.292354ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.941290592Z level=info msg="Executing migration" id="add index team_role.org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.945271532Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=3.98094ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.949463305Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.950609816Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.146511ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.954134832Z level=info msg="Executing migration" id="add index team_role.team_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.955264154Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.128902ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.961267975Z level=info msg="Executing migration" id="create user role table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.962320386Z level=info msg="Migration successfully executed" id="create user role table" duration=1.051921ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.966132864Z level=info msg="Executing migration" id="add index user_role.org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.967311336Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.181082ms
07:47:34 grafana |
logger=migrator t=2025-06-16T07:45:45.970806842Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.972088835Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.281763ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.976631991Z level=info msg="Executing migration" id="add index user_role.user_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.978258988Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.623897ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.984245759Z level=info msg="Executing migration" id="create builtin role table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.985424781Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.178332ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.990003157Z level=info msg="Executing migration" id="add index builtin_role.role_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.991982948Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.981101ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.996038429Z level=info msg="Executing migration" id="add index builtin_role.name" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:45.998836767Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.796958ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.003930599Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.01371098Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.779951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.017742151Z level=info msg="Executing 
migration" id="add index builtin_role.org_id" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.018940653Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.198302ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.02253454Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.023757553Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.222733ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.02827076Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.029813895Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.541815ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.033839897Z level=info msg="Executing migration" id="add unique index role.uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.035736386Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.896049ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.04090706Z level=info msg="Executing migration" id="create seed assignment table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.04187607Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=965.489µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.047042292Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.048983272Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.94025ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.053368948Z level=info msg="Executing migration" id="add column hidden to role table" 07:47:34 grafana | 
logger=migrator t=2025-06-16T07:45:46.062050587Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.682349ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.066644684Z level=info msg="Executing migration" id="permission kind migration" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.072934489Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.288665ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.077529276Z level=info msg="Executing migration" id="permission attribute migration" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.08570176Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.170354ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.090300818Z level=info msg="Executing migration" id="permission identifier migration" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.098787235Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.484967ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.102731156Z level=info msg="Executing migration" id="add permission identifier index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.103956528Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.225292ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.120702401Z level=info msg="Executing migration" id="add permission action scope role_id index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.121773462Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.070951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.15076726Z level=info msg="Executing migration" id="remove permission role_id action scope index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.151899522Z level=info msg="Migration 
successfully executed" id="remove permission role_id action scope index" duration=1.130872ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.157611521Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.169808226Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=12.197005ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.174522585Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.175757997Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.234992ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.179449926Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.182197944Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=2.745968ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.187878262Z level=info msg="Executing migration" id="create query_history table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.188962523Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.083951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.192723052Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.193980875Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.257483ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.197581042Z level=info msg="Executing migration" id="alter table query_history alter column 
created_by type to bigint" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.197701173Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=117.701µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.204087699Z level=info msg="Executing migration" id="create query_history_details table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.20519129Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.102951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.210277523Z level=info msg="Executing migration" id="rbac disabled migrator" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.210515775Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=234.672µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.214218843Z level=info msg="Executing migration" id="teams permissions migration" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.215098032Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=876.479µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.226096986Z level=info msg="Executing migration" id="dashboard permissions" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.227051935Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=956.6µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.231789394Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.232460461Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=671.067µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.235496232Z level=info msg="Executing migration" id="drop managed folder create actions" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.235709384Z level=info 
msg="Migration successfully executed" id="drop managed folder create actions" duration=211.932µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.238796396Z level=info msg="Executing migration" id="alerting notification permissions" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.239306482Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=509.736µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.242428144Z level=info msg="Executing migration" id="create query_history_star table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.243415394Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=986.78µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.248233893Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.249490516Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.259203ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.252584108Z level=info msg="Executing migration" id="add column org_id in query_history_star" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.261868574Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.266915ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.26543425Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.265458391Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=25.141µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.268744744Z level=info msg="Executing migration" id="create correlation table v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.269835596Z level=info 
msg="Migration successfully executed" id="create correlation table v1" duration=1.090262ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.274700776Z level=info msg="Executing migration" id="add index correlations.uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.275902958Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.202062ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.2790381Z level=info msg="Executing migration" id="add index correlations.source_uid" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.280301833Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.260683ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.28389418Z level=info msg="Executing migration" id="add correlation config column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.292793512Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.898142ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.297420599Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.298169897Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=749.758µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.301032847Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.301889806Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=855.369µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.309420863Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.334323009Z level=info msg="Migration successfully executed" id="Rename table correlation 
to correlation_tmp_qwerty - v1" duration=24.901576ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.338020267Z level=info msg="Executing migration" id="create correlation v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.338799375Z level=info msg="Migration successfully executed" id="create correlation v2" duration=778.548µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.343776086Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.345656596Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.87823ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.350328204Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.352532867Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.204223ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.368941716Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.370150248Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.207842ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.374940787Z level=info msg="Executing migration" id="copy correlation v1 to v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.37516958Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=228.793µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.37811312Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.378842858Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=729.498µs 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:46.382698377Z level=info msg="Executing migration" id="add provisioning column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.391118604Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.419637ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.39755514Z level=info msg="Executing migration" id="add type column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.406691714Z level=info msg="Migration successfully executed" id="add type column" duration=9.136724ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.410361172Z level=info msg="Executing migration" id="create entity_events table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.411065009Z level=info msg="Migration successfully executed" id="create entity_events table" duration=708.277µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.419235643Z level=info msg="Executing migration" id="create dashboard public config v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.421082472Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.845979ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.425030673Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.425509418Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.428551579Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.429073035Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 07:47:34 grafana | logger=migrator 
t=2025-06-16T07:45:46.433839554Z level=info msg="Executing migration" id="Drop old dashboard public config table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.435622122Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.784748ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.441367041Z level=info msg="Executing migration" id="recreate dashboard public config v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.442688245Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.320394ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.447645276Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.448738577Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.093341ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.45584734Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.456928141Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.079951ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.462541689Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.46358012Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.037761ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.466858314Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.467863324Z level=info msg="Migration successfully executed" 
id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.00432ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.47523157Z level=info msg="Executing migration" id="Drop public config table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.476100769Z level=info msg="Migration successfully executed" id="Drop public config table" duration=869.309µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.479560514Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.481609955Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.048691ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.486849529Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.48886763Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.019621ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.492847001Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.49374777Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=896.159µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.498179446Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.499203016Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.0232ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.506727084Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 07:47:34 
grafana | logger=migrator t=2025-06-16T07:45:46.533715782Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.986007ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.538052356Z level=info msg="Executing migration" id="add annotations_enabled column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.547402333Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.345686ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.552075601Z level=info msg="Executing migration" id="add time_selection_enabled column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.558638998Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.563207ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.583900068Z level=info msg="Executing migration" id="delete orphaned public dashboards" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.584320533Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=420.915µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.589327784Z level=info msg="Executing migration" id="add share column" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.598013333Z level=info msg="Migration successfully executed" id="add share column" duration=8.684649ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.602450309Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.602639511Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=188.922µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.606280109Z level=info msg="Executing migration" id="create file table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.607322099Z 
level=info msg="Migration successfully executed" id="create file table" duration=1.04202ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.612431192Z level=info msg="Executing migration" id="file table idx: path natural pk" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.614456103Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.99053ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.62390017Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.625129252Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.231282ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.628216894Z level=info msg="Executing migration" id="create file_meta table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.628850581Z level=info msg="Migration successfully executed" id="create file_meta table" duration=633.777µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.631623419Z level=info msg="Executing migration" id="file table idx: path key" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.632447398Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=823.689µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.637231667Z level=info msg="Executing migration" id="set path collation in file table" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.637251837Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=21.04µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.640567762Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.640598732Z level=info msg="Migration successfully executed" id="migrate contents column to 
mediumblob for MySQL" duration=33.88µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.644444951Z level=info msg="Executing migration" id="managed permissions migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.644944566Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=499.205µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.649253751Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.649407122Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=153.371µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.652717817Z level=info msg="Executing migration" id="RBAC action name migrator"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.653720257Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=998.68µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.657686508Z level=info msg="Executing migration" id="Add UID column to playlist"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.666957043Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.269075ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.670885243Z level=info msg="Executing migration" id="Update uid column values in playlist"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.671287827Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=404.194µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.675979606Z level=info msg="Executing migration" id="Add index for uid in playlist"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.677492022Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.512446ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.680892656Z level=info msg="Executing migration" id="update group index for alert rules"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.681302341Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=410.215µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.684394622Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.684667755Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=272.803µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.687617876Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.688168851Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=549.945µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.692233953Z level=info msg="Executing migration" id="add action column to seed_assignment"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.703347408Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.103394ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.706655721Z level=info msg="Executing migration" id="add scope column to seed_assignment"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.713555163Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.894822ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.717847027Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.718897087Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.0496ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.722826198Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.803766211Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=80.934733ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.808608531Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.809887004Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.280013ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.814290009Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.815550613Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.260014ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.818833516Z level=info msg="Executing migration" id="add primary key to seed_assigment"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.847833495Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.986828ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.853923167Z level=info msg="Executing migration" id="add origin column to seed_assignment"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.861356284Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.432257ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.864675448Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.865037992Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=362.114µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.868049853Z level=info msg="Executing migration" id="prevent seeding OnCall access"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.868287335Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=237.002µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.872940863Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.873286527Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=347.724µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.877807873Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.877963205Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=155.142µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.88045766Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.880751173Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=293.213µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.883641773Z level=info msg="Executing migration" id="create folder table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.884581713Z level=info msg="Migration successfully executed" id="create folder table" duration=939.74µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.887388082Z level=info msg="Executing migration" id="Add index for parent_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.888491763Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.099951ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.892917689Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.89404709Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.129691ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.89888111Z level=info msg="Executing migration" id="Update folder title length"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.89890264Z level=info msg="Migration successfully executed" id="Update folder title length" duration=25.2µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.903094183Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.904009493Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=911.91µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.911154896Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.912338759Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.183583ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.915791994Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.917161118Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.367594ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.92218591Z level=info msg="Executing migration" id="Sync dashboard and folder table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.922888597Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=702.477µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.930422185Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.930712798Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=290.403µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.935128153Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.936278905Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.150562ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.941442608Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.94262914Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.188472ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.945703272Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.946881274Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.169162ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.950075587Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.951527042Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.451755ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.956615295Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.957860457Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.249113ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.961118961Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.962329193Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.209942ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.96781455Z level=info msg="Executing migration" id="create anon_device table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.968983342Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.168892ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.972535448Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.97372304Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.187412ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.977711601Z level=info msg="Executing migration" id="add index anon_device.updated_at"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.978840173Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.128302ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.983204258Z level=info msg="Executing migration" id="create signing_key table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.984063117Z level=info msg="Migration successfully executed" id="create signing_key table" duration=858.169µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.987660254Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.988795485Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.135121ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.994667646Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:46.99604757Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.379844ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.021433916Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.021970482Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=538.356µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.02749562Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.034903697Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.407037ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.038120931Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.038853089Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=732.978µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.042872671Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.042893571Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=18.93µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.046012224Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.047155926Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.143592ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.051533462Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.051551852Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.8µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.054739695Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.056023469Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.283234ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.059351654Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.061067002Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.713828ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.066786762Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.068666272Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.878939ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.072024307Z level=info msg="Executing migration" id="create sso_setting table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.073182019Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.153682ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.076620165Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.077675846Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.058611ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.080945671Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.081263494Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=319.683µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.084417387Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.085139825Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=721.958µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.089507341Z level=info msg="Executing migration" id="create cloud_migration table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.090560632Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.052671ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.093709655Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.094725796Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.016681ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.098180632Z level=info msg="Executing migration" id="add stack_id column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.109262708Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.081166ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.113518183Z level=info msg="Executing migration" id="add region_slug column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.123856932Z level=info msg="Migration successfully executed" id="add region_slug column" duration=10.337709ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.12753215Z level=info msg="Executing migration" id="add cluster_slug column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.134580905Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.047665ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.13796683Z level=info msg="Executing migration" id="add migration uid column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.148057106Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.089726ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.152476633Z level=info msg="Executing migration" id="Update uid column values for migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.152674095Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=196.592µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.156397504Z level=info msg="Executing migration" id="Add unique index migration_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.157573067Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.175232ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.160854991Z level=info msg="Executing migration" id="add migration run uid column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.170467782Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.615121ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.175335753Z level=info msg="Executing migration" id="Update uid column values for migration run"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.175603446Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=269.813µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.178997162Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.180027573Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.030651ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.18451863Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.205152387Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=20.633507ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.209121308Z level=info msg="Executing migration" id="create cloud_migration_session v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.209861746Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=740.788µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.233849348Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.235845079Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.995731ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.240873622Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.241270306Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=395.844µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.244984215Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.245884705Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=899.99µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.249684895Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.277419506Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=27.735581ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.284300118Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.285078397Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=778.229µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.288663925Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.289860667Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.196252ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.293433645Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.293813259Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=379.274µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.298242945Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.299115674Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=872.229µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.302854973Z level=info msg="Executing migration" id="add snapshot upload_url column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.314061501Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.205678ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.317880242Z level=info msg="Executing migration" id="add snapshot status column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.325049727Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.169165ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.330609485Z level=info msg="Executing migration" id="add snapshot local_directory column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.342013425Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=11.4033ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.345631353Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.352578166Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.944113ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.356064243Z level=info msg="Executing migration" id="add snapshot encryption_key column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.367017478Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=10.952745ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.37194513Z level=info msg="Executing migration" id="add snapshot error_string column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.385993917Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=14.049167ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.389535215Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.390283712Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=747.797µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.393614248Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.43097557Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.360762ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.483683814Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.497367627Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=13.686753ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.501371619Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.515985403Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=14.614204ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.519778123Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.530075261Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.325479ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.535844422Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.54518842Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.341088ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.548876089Z level=info msg="Executing migration" id="increase resource_uid column length"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.54893601Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=60.351µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.553867891Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.553948912Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=81.291µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.558824693Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.569261753Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.43657ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.573802881Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.580999486Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.195635ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.585041249Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.585587555Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=545.576µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.58988536Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.590218034Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=332.004µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.596400088Z level=info msg="Executing migration" id="add record column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.60610887Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=9.708542ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.610737879Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.621774855Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=11.070176ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.625172811Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.632326086Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.154305ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.637228478Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.649004051Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=11.775633ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.653768642Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.655178387Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=1.409274ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.660009687Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.66977178Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.761443ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.673224746Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.680407992Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=7.182116ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.686579386Z level=info msg="Executing migration" id="delete orphaned service account permissions"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.68691041Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=333.484µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.690592518Z level=info msg="Executing migration" id="adding action set permissions"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.691172375Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=579.857µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.694589421Z level=info msg="Executing migration" id="create user_external_session table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.695762953Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.173052ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.700172229Z level=info msg="Executing migration" id="increase name_id column length to 1024"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.700306431Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=134.722µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.704431544Z level=info msg="Executing migration" id="increase session_id column length to 1024"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.704523395Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=92.171µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.707793239Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.708355875Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=562.366µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.711854492Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.722334882Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.47959ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.726493366Z level=info msg="Executing migration" id="add updated_by column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.733395398Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=6.901472ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.736905615Z level=info msg="Executing migration" id="add alert_rule_state table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.737906156Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.000371ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.744327344Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.74775993Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=3.432106ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.752785412Z level=info msg="Executing migration" id="add guid column to alert_rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.762738477Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=9.952265ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.768562238Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.777611793Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.049065ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.782110311Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.782166122Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.782476955Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.782533915Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=422.534µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.786890251Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.787658939Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=768.388µs
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.791214866Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.792408579Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.193563ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.795890055Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.797179289Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.288694ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.805002552Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.806226414Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.223142ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.809439228Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.810655381Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.215523ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.815147478Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.826657089Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.509191ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.83244961Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version"
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.843495886Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=11.045886ms
07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.848230216Z level=info msg="Executing migration" id="add
missing_series_evals_to_resolve column to alert_rule" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.859494064Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=11.263268ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.863112062Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.873038737Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.926125ms 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.877557894Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.877826997Z level=info msg="Removed 0 datasources:drilldown permissions" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.877917218Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=358.554µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.882289164Z level=info msg="Executing migration" id="remove title in folder unique index" 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.883180833Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=891.449µs 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.919408184Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.137078913s 07:47:34 grafana | logger=migrator t=2025-06-16T07:45:47.92087839Z level=info msg="Unlocking database" 07:47:34 grafana | logger=sqlstore t=2025-06-16T07:45:47.93903487Z level=info msg="Created default admin" user=admin 07:47:34 grafana | logger=sqlstore t=2025-06-16T07:45:47.939303553Z level=info msg="Created default organization" 07:47:34 grafana | logger=secrets t=2025-06-16T07:45:47.943885131Z level=info msg="Envelope 
encryption state" enabled=true currentprovider=secretKey.v1 07:47:34 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T07:45:48.032384365Z level=info msg="Restored cache from database" duration=549.425µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.042934004Z level=info msg="Locking database" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.043029655Z level=info msg="Starting DB migrations" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.051117977Z level=info msg="Executing migration" id="create resource_migration_log table" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.051869975Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=750.478µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.055588824Z level=info msg="Executing migration" id="Initialize resource tables" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.055627904Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=39.52µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.061294512Z level=info msg="Executing migration" id="drop table resource" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.061454684Z level=info msg="Migration successfully executed" id="drop table resource" duration=159.962µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.066119392Z level=info msg="Executing migration" id="create table resource" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.068016601Z level=info msg="Migration successfully executed" id="create table resource" duration=1.896139ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.071960122Z level=info msg="Executing migration" id="create table resource, index: 0" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.073542748Z level=info msg="Migration 
successfully executed" id="create table resource, index: 0" duration=1.584256ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.076785042Z level=info msg="Executing migration" id="drop table resource_history" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.076921693Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=135.401µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.081669422Z level=info msg="Executing migration" id="create table resource_history" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.082856024Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.186122ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.086092467Z level=info msg="Executing migration" id="create table resource_history, index: 0" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.087843065Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.749848ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.092732995Z level=info msg="Executing migration" id="create table resource_history, index: 1" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.094471543Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.738858ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.100237162Z level=info msg="Executing migration" id="drop table resource_version" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.100399644Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=161.762µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.104190893Z level=info msg="Executing migration" id="create table resource_version" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.105691588Z level=info 
msg="Migration successfully executed" id="create table resource_version" duration=1.503385ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.10971496Z level=info msg="Executing migration" id="create table resource_version, index: 0" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.113237116Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=3.521116ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.143107553Z level=info msg="Executing migration" id="drop table resource_blob" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.143408126Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=302.233µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.15060696Z level=info msg="Executing migration" id="create table resource_blob" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.152875743Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.268473ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.157100597Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.15841631Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.315863ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.164181849Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.166275151Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.093622ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.172849318Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 07:47:34 grafana | logger=resource-migrator 
t=2025-06-16T07:45:48.183964433Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=11.117575ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.188727852Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.197070217Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.346016ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.200766375Z level=info msg="Executing migration" id="Add index to resource_history for polling" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.202044949Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.278794ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.205194581Z level=info msg="Executing migration" id="Add index to resource for loading" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.206444294Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.249283ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.209609736Z level=info msg="Executing migration" id="Add column folder in resource_history" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.22069886Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=11.088624ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.228066416Z level=info msg="Executing migration" id="Add column folder in resource" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.241413633Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=13.351927ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.244892049Z level=info 
msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 07:47:34 grafana | logger=deletion-marker-migrator t=2025-06-16T07:45:48.244920459Z level=info msg="finding any deletion markers" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.245363364Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=470.945µs 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.250062922Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.251348635Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.284953ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.254220024Z level=info msg="Executing migration" id="Add generation to resource history" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.264540061Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.318577ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.269890116Z level=info msg="Executing migration" id="Add generation index to resource history" 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.271387371Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.495425ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.275003358Z level=info msg="migrations completed" performed=26 skipped=0 duration=223.921061ms 07:47:34 grafana | logger=resource-migrator t=2025-06-16T07:45:48.275992798Z level=info msg="Unlocking database" 07:47:34 grafana | t=2025-06-16T07:45:48.276413112Z level=info caller=logger.go:214 time=2025-06-16T07:45:48.276386112Z msg="Using channel notifier" logger=sql-resource-server 07:47:34 grafana | logger=plugin.store t=2025-06-16T07:45:48.28982128Z level=info 
msg="Loading plugins..." 07:47:34 grafana | logger=plugins.registration t=2025-06-16T07:45:48.332685371Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 07:47:34 grafana | logger=plugins.initialization t=2025-06-16T07:45:48.332720041Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 07:47:34 grafana | logger=plugin.store t=2025-06-16T07:45:48.332941613Z level=info msg="Plugins loaded" count=53 duration=43.121273ms 07:47:34 grafana | logger=query_data t=2025-06-16T07:45:48.338518101Z level=info msg="Query Service initialization" 07:47:34 grafana | logger=live.push_http t=2025-06-16T07:45:48.343971147Z level=info msg="Live Push Gateway initialization" 07:47:34 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T07:45:48.36373006Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 07:47:34 grafana | logger=ngalert t=2025-06-16T07:45:48.374343859Z level=info msg="Using simple database alert instance store" 07:47:34 grafana | logger=ngalert.state.manager.persist t=2025-06-16T07:45:48.374526091Z level=info msg="Using sync state persister" 07:47:34 grafana | logger=infra.usagestats.collector t=2025-06-16T07:45:48.37838544Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 07:47:34 grafana | logger=grafanaStorageLogger t=2025-06-16T07:45:48.380491812Z level=info msg="Storage starting" 07:47:34 grafana | logger=ngalert.state.manager t=2025-06-16T07:45:48.381892936Z level=info msg="Warming state cache for startup" 07:47:34 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T07:45:48.383768766Z level=info msg="Starting MultiOrg Alertmanager" 07:47:34 grafana | logger=http.server t=2025-06-16T07:45:48.385698286Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 07:47:34 grafana | logger=plugin.backgroundinstaller 
t=2025-06-16T07:45:48.38608421Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 07:47:34 grafana | logger=plugins.update.checker t=2025-06-16T07:45:48.476174135Z level=info msg="Update check succeeded" duration=93.049706ms 07:47:34 grafana | logger=grafana.update.checker t=2025-06-16T07:45:48.47862239Z level=info msg="Update check succeeded" duration=95.677743ms 07:47:34 grafana | logger=sqlstore.transactions t=2025-06-16T07:45:48.494461293Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 07:47:34 grafana | logger=sqlstore.transactions t=2025-06-16T07:45:48.495478183Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 07:47:34 grafana | logger=provisioning.datasources t=2025-06-16T07:45:48.527828946Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 07:47:34 grafana | logger=ngalert.state.manager t=2025-06-16T07:45:48.545018312Z level=info msg="State cache has been initialized" states=0 duration=163.124846ms 07:47:34 grafana | logger=ngalert.scheduler t=2025-06-16T07:45:48.545059123Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 07:47:34 grafana | logger=ticker t=2025-06-16T07:45:48.545108533Z level=info msg=starting first_tick=2025-06-16T07:45:50Z 07:47:34 grafana | logger=provisioning.alerting t=2025-06-16T07:45:48.555982925Z level=info msg="starting to provision alerting" 07:47:34 grafana | logger=provisioning.alerting t=2025-06-16T07:45:48.556097356Z level=info msg="finished to provision alerting" 07:47:34 grafana | logger=provisioning.dashboard t=2025-06-16T07:45:48.55745161Z level=info msg="starting to provision dashboards" 07:47:34 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T07:45:48.600392621Z level=info msg="Patterns update finished" duration=141.327642ms 07:47:34 grafana | logger=plugin.installer t=2025-06-16T07:45:48.762357326Z level=info 
msg="Installing plugin" pluginId=grafana-exploretraces-app version= 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.81240795Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.813281859Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.814160778Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.814950666Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.816188159Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.818162699Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.820885777Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.821744456Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=grafana-apiserver t=2025-06-16T07:45:48.822753106Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 07:47:34 grafana | logger=installer.fs t=2025-06-16T07:45:48.830678808Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 07:47:34 grafana | logger=plugins.registration t=2025-06-16T07:45:48.860232391Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 07:47:34 grafana | logger=plugin.backgroundinstaller 
t=2025-06-16T07:45:48.860283652Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=474.165732ms 07:47:34 grafana | logger=plugin.backgroundinstaller t=2025-06-16T07:45:48.860331072Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 07:47:34 grafana | logger=app-registry t=2025-06-16T07:45:48.870989362Z level=info msg="app registry initialized" 07:47:34 grafana | logger=plugin.installer t=2025-06-16T07:45:49.062158076Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 07:47:34 grafana | logger=installer.fs t=2025-06-16T07:45:49.132778111Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 07:47:34 grafana | logger=plugins.registration t=2025-06-16T07:45:49.154161861Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 07:47:34 grafana | logger=plugin.backgroundinstaller t=2025-06-16T07:45:49.154188331Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=293.843768ms 07:47:34 grafana | logger=plugin.backgroundinstaller t=2025-06-16T07:45:49.154235391Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 07:47:34 grafana | logger=plugin.installer t=2025-06-16T07:45:49.427857512Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 07:47:34 grafana | logger=provisioning.dashboard t=2025-06-16T07:45:49.432661261Z level=info msg="finished to provision dashboards" 07:47:34 grafana | logger=installer.fs t=2025-06-16T07:45:49.568567477Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 07:47:34 grafana | logger=plugins.registration t=2025-06-16T07:45:49.594077829Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 07:47:34 grafana | 
logger=plugin.backgroundinstaller t=2025-06-16T07:45:49.594105629Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=439.859868ms 07:47:34 grafana | logger=plugin.backgroundinstaller t=2025-06-16T07:45:49.594135909Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 07:47:34 grafana | logger=plugin.installer t=2025-06-16T07:45:49.77436178Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 07:47:34 grafana | logger=installer.fs t=2025-06-16T07:45:49.824505385Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 07:47:34 grafana | logger=plugins.registration t=2025-06-16T07:45:49.841488249Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 07:47:34 grafana | logger=plugin.backgroundinstaller t=2025-06-16T07:45:49.84151188Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=247.369261ms 07:47:34 grafana | logger=infra.usagestats t=2025-06-16T07:46:20.387615629Z level=info msg="Usage stats are ready to report" 07:47:34 kafka | ===> User 07:47:34 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 07:47:34 kafka | ===> Configuring ... 07:47:34 kafka | Running in Zookeeper mode... 07:47:34 kafka | ===> Running preflight checks ... 07:47:34 kafka | ===> Check if /var/lib/kafka/data is writable ... 07:47:34 kafka | ===> Check if Zookeeper is healthy ... 
07:47:34 kafka | [2025-06-16 07:45:48,033] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,033] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,033] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.
jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client 
environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,034] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,035] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,037] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,041] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 07:47:34 kafka | [2025-06-16 07:45:48,045] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 07:47:34 kafka | [2025-06-16 07:45:48,052] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:48,087] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:48,088] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:48,098] INFO Socket connection established, initiating session, client: /172.17.0.8:34580, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:48,157] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000029a5c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:48,289] INFO Session: 0x10000029a5c0000 closed (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:48,289] INFO EventThread shut down for session: 0x10000029a5c0000 (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | Using log4j config /etc/kafka/log4j.properties 07:47:34 kafka | ===> Launching ... 07:47:34 kafka | ===> Launching kafka ... 07:47:34 kafka | [2025-06-16 07:45:49,046] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 07:47:34 kafka | [2025-06-16 07:45:49,339] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 07:47:34 kafka | [2025-06-16 07:45:49,463] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 07:47:34 kafka | [2025-06-16 07:45:49,465] INFO starting (kafka.server.KafkaServer) 07:47:34 kafka | [2025-06-16 07:45:49,465] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 07:47:34 kafka | [2025-06-16 07:45:49,478] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 07:47:34 kafka | [2025-06-16 07:45:49,481] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,481] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,481] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,481] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/k
afka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/..
/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafk
a/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:user.home=/home/appuser 
(org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,482] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,484] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 07:47:34 kafka | [2025-06-16 07:45:49,488] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 07:47:34 kafka | [2025-06-16 07:45:49,494] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:49,501] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 07:47:34 kafka | [2025-06-16 07:45:49,505] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:49,512] INFO Socket connection established, initiating session, client: /172.17.0.8:34582, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:49,521] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000029a5c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 07:47:34 kafka | [2025-06-16 07:45:49,526] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 07:47:34 kafka | [2025-06-16 07:45:49,844] INFO Cluster ID = 3qbXtuCCQ9WamUW573wmtQ (kafka.server.KafkaServer) 07:47:34 kafka | [2025-06-16 07:45:49,849] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 07:47:34 kafka | [2025-06-16 07:45:49,914] INFO KafkaConfig values: 07:47:34 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 07:47:34 kafka | alter.config.policy.class.name = null 07:47:34 kafka | alter.log.dirs.replication.quota.window.num = 11 07:47:34 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 07:47:34 kafka | authorizer.class.name = 07:47:34 kafka | auto.create.topics.enable = true 07:47:34 kafka | auto.include.jmx.reporter = true 07:47:34 kafka | auto.leader.rebalance.enable = true 07:47:34 kafka | background.threads = 10 07:47:34 kafka | broker.heartbeat.interval.ms = 2000 07:47:34 kafka | broker.id = 1 07:47:34 kafka | broker.id.generation.enable = true 07:47:34 kafka | broker.rack = null 07:47:34 kafka | broker.session.timeout.ms = 9000 07:47:34 kafka | client.quota.callback.class = null 07:47:34 kafka | compression.type = producer 07:47:34 kafka | connection.failed.authentication.delay.ms = 100 07:47:34 kafka | connections.max.idle.ms = 600000 07:47:34 kafka | connections.max.reauth.ms = 0 07:47:34 kafka | control.plane.listener.name = null 07:47:34 kafka | controlled.shutdown.enable = true 07:47:34 kafka | controlled.shutdown.max.retries = 3 07:47:34 kafka | controlled.shutdown.retry.backoff.ms = 5000 07:47:34 kafka | controller.listener.names = null 07:47:34 kafka | controller.quorum.append.linger.ms = 25 07:47:34 kafka | controller.quorum.election.backoff.max.ms = 1000 07:47:34 kafka | controller.quorum.election.timeout.ms = 1000 07:47:34 kafka | controller.quorum.fetch.timeout.ms = 2000 07:47:34 kafka | controller.quorum.request.timeout.ms = 2000 07:47:34 kafka | 
controller.quorum.retry.backoff.ms = 20 07:47:34 kafka | controller.quorum.voters = [] 07:47:34 kafka | controller.quota.window.num = 11 07:47:34 kafka | controller.quota.window.size.seconds = 1 07:47:34 kafka | controller.socket.timeout.ms = 30000 07:47:34 kafka | create.topic.policy.class.name = null 07:47:34 kafka | default.replication.factor = 1 07:47:34 kafka | delegation.token.expiry.check.interval.ms = 3600000 07:47:34 kafka | delegation.token.expiry.time.ms = 86400000 07:47:34 kafka | delegation.token.master.key = null 07:47:34 kafka | delegation.token.max.lifetime.ms = 604800000 07:47:34 kafka | delegation.token.secret.key = null 07:47:34 kafka | delete.records.purgatory.purge.interval.requests = 1 07:47:34 kafka | delete.topic.enable = true 07:47:34 kafka | early.start.listeners = null 07:47:34 kafka | fetch.max.bytes = 57671680 07:47:34 kafka | fetch.purgatory.purge.interval.requests = 1000 07:47:34 kafka | group.initial.rebalance.delay.ms = 3000 07:47:34 kafka | group.max.session.timeout.ms = 1800000 07:47:34 kafka | group.max.size = 2147483647 07:47:34 kafka | group.min.session.timeout.ms = 6000 07:47:34 kafka | initial.broker.registration.timeout.ms = 60000 07:47:34 kafka | inter.broker.listener.name = PLAINTEXT 07:47:34 kafka | inter.broker.protocol.version = 3.4-IV0 07:47:34 kafka | kafka.metrics.polling.interval.secs = 10 07:47:34 kafka | kafka.metrics.reporters = [] 07:47:34 kafka | leader.imbalance.check.interval.seconds = 300 07:47:34 kafka | leader.imbalance.per.broker.percentage = 10 07:47:34 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 07:47:34 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 07:47:34 kafka | log.cleaner.backoff.ms = 15000 07:47:34 kafka | log.cleaner.dedupe.buffer.size = 134217728 07:47:34 kafka | log.cleaner.delete.retention.ms = 86400000 07:47:34 kafka | log.cleaner.enable = true 07:47:34 kafka | log.cleaner.io.buffer.load.factor = 0.9 07:47:34 kafka | 
log.cleaner.io.buffer.size = 524288 07:47:34 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 07:47:34 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 07:47:34 kafka | log.cleaner.min.cleanable.ratio = 0.5 07:47:34 kafka | log.cleaner.min.compaction.lag.ms = 0 07:47:34 kafka | log.cleaner.threads = 1 07:47:34 kafka | log.cleanup.policy = [delete] 07:47:34 kafka | log.dir = /tmp/kafka-logs 07:47:34 kafka | log.dirs = /var/lib/kafka/data 07:47:34 kafka | log.flush.interval.messages = 9223372036854775807 07:47:34 kafka | log.flush.interval.ms = null 07:47:34 kafka | log.flush.offset.checkpoint.interval.ms = 60000 07:47:34 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 07:47:34 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 07:47:34 kafka | log.index.interval.bytes = 4096 07:47:34 kafka | log.index.size.max.bytes = 10485760 07:47:34 kafka | log.message.downconversion.enable = true 07:47:34 kafka | log.message.format.version = 3.0-IV1 07:47:34 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 07:47:34 kafka | log.message.timestamp.type = CreateTime 07:47:34 kafka | log.preallocate = false 07:47:34 kafka | log.retention.bytes = -1 07:47:34 kafka | log.retention.check.interval.ms = 300000 07:47:34 kafka | log.retention.hours = 168 07:47:34 kafka | log.retention.minutes = null 07:47:34 kafka | log.retention.ms = null 07:47:34 kafka | log.roll.hours = 168 07:47:34 kafka | log.roll.jitter.hours = 0 07:47:34 kafka | log.roll.jitter.ms = null 07:47:34 kafka | log.roll.ms = null 07:47:34 kafka | log.segment.bytes = 1073741824 07:47:34 kafka | log.segment.delete.delay.ms = 60000 07:47:34 kafka | max.connection.creation.rate = 2147483647 07:47:34 kafka | max.connections = 2147483647 07:47:34 kafka | max.connections.per.ip = 2147483647 07:47:34 kafka | max.connections.per.ip.overrides = 07:47:34 kafka | max.incremental.fetch.session.cache.slots = 1000 07:47:34 kafka | message.max.bytes = 1048588 
07:47:34 kafka | metadata.log.dir = null 07:47:34 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 07:47:34 kafka | metadata.log.max.snapshot.interval.ms = 3600000 07:47:34 kafka | metadata.log.segment.bytes = 1073741824 07:47:34 kafka | metadata.log.segment.min.bytes = 8388608 07:47:34 kafka | metadata.log.segment.ms = 604800000 07:47:34 kafka | metadata.max.idle.interval.ms = 500 07:47:34 kafka | metadata.max.retention.bytes = 104857600 07:47:34 kafka | metadata.max.retention.ms = 604800000 07:47:34 kafka | metric.reporters = [] 07:47:34 kafka | metrics.num.samples = 2 07:47:34 kafka | metrics.recording.level = INFO 07:47:34 kafka | metrics.sample.window.ms = 30000 07:47:34 kafka | min.insync.replicas = 1 07:47:34 kafka | node.id = 1 07:47:34 kafka | num.io.threads = 8 07:47:34 kafka | num.network.threads = 3 07:47:34 kafka | num.partitions = 1 07:47:34 kafka | num.recovery.threads.per.data.dir = 1 07:47:34 kafka | num.replica.alter.log.dirs.threads = null 07:47:34 kafka | num.replica.fetchers = 1 07:47:34 kafka | offset.metadata.max.bytes = 4096 07:47:34 kafka | offsets.commit.required.acks = -1 07:47:34 kafka | offsets.commit.timeout.ms = 5000 07:47:34 kafka | offsets.load.buffer.size = 5242880 07:47:34 kafka | offsets.retention.check.interval.ms = 600000 07:47:34 kafka | offsets.retention.minutes = 10080 07:47:34 kafka | offsets.topic.compression.codec = 0 07:47:34 kafka | offsets.topic.num.partitions = 50 07:47:34 kafka | offsets.topic.replication.factor = 1 07:47:34 kafka | offsets.topic.segment.bytes = 104857600 07:47:34 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 07:47:34 kafka | password.encoder.iterations = 4096 07:47:34 kafka | password.encoder.key.length = 128 07:47:34 kafka | password.encoder.keyfactory.algorithm = null 07:47:34 kafka | password.encoder.old.secret = null 07:47:34 kafka | password.encoder.secret = null 07:47:34 kafka | principal.builder.class = class 
org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 07:47:34 kafka | process.roles = [] 07:47:34 kafka | producer.id.expiration.check.interval.ms = 600000 07:47:34 kafka | producer.id.expiration.ms = 86400000 07:47:34 kafka | producer.purgatory.purge.interval.requests = 1000 07:47:34 kafka | queued.max.request.bytes = -1 07:47:34 kafka | queued.max.requests = 500 07:47:34 kafka | quota.window.num = 11 07:47:34 kafka | quota.window.size.seconds = 1 07:47:34 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 07:47:34 kafka | remote.log.manager.task.interval.ms = 30000 07:47:34 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 07:47:34 kafka | remote.log.manager.task.retry.backoff.ms = 500 07:47:34 kafka | remote.log.manager.task.retry.jitter = 0.2 07:47:34 kafka | remote.log.manager.thread.pool.size = 10 07:47:34 kafka | remote.log.metadata.manager.class.name = null 07:47:34 kafka | remote.log.metadata.manager.class.path = null 07:47:34 kafka | remote.log.metadata.manager.impl.prefix = null 07:47:34 kafka | remote.log.metadata.manager.listener.name = null 07:47:34 kafka | remote.log.reader.max.pending.tasks = 100 07:47:34 kafka | remote.log.reader.threads = 10 07:47:34 kafka | remote.log.storage.manager.class.name = null 07:47:34 kafka | remote.log.storage.manager.class.path = null 07:47:34 kafka | remote.log.storage.manager.impl.prefix = null 07:47:34 kafka | remote.log.storage.system.enable = false 07:47:34 kafka | replica.fetch.backoff.ms = 1000 07:47:34 kafka | replica.fetch.max.bytes = 1048576 07:47:34 kafka | replica.fetch.min.bytes = 1 07:47:34 kafka | replica.fetch.response.max.bytes = 10485760 07:47:34 kafka | replica.fetch.wait.max.ms = 500 07:47:34 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 07:47:34 kafka | replica.lag.time.max.ms = 30000 07:47:34 kafka | replica.selector.class = null 07:47:34 kafka | replica.socket.receive.buffer.bytes = 65536 07:47:34 kafka | replica.socket.timeout.ms = 
30000 07:47:34 kafka | replication.quota.window.num = 11 07:47:34 kafka | replication.quota.window.size.seconds = 1 07:47:34 kafka | request.timeout.ms = 30000 07:47:34 kafka | reserved.broker.max.id = 1000 07:47:34 kafka | sasl.client.callback.handler.class = null 07:47:34 kafka | sasl.enabled.mechanisms = [GSSAPI] 07:47:34 kafka | sasl.jaas.config = null 07:47:34 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 07:47:34 kafka | sasl.kerberos.min.time.before.relogin = 60000 07:47:34 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 07:47:34 kafka | sasl.kerberos.service.name = null 07:47:34 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 07:47:34 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 07:47:34 kafka | sasl.login.callback.handler.class = null 07:47:34 kafka | sasl.login.class = null 07:47:34 kafka | sasl.login.connect.timeout.ms = null 07:47:34 kafka | sasl.login.read.timeout.ms = null 07:47:34 kafka | sasl.login.refresh.buffer.seconds = 300 07:47:34 kafka | sasl.login.refresh.min.period.seconds = 60 07:47:34 kafka | sasl.login.refresh.window.factor = 0.8 07:47:34 kafka | sasl.login.refresh.window.jitter = 0.05 07:47:34 kafka | sasl.login.retry.backoff.max.ms = 10000 07:47:34 kafka | sasl.login.retry.backoff.ms = 100 07:47:34 kafka | sasl.mechanism.controller.protocol = GSSAPI 07:47:34 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 07:47:34 kafka | sasl.oauthbearer.clock.skew.seconds = 30 07:47:34 kafka | sasl.oauthbearer.expected.audience = null 07:47:34 kafka | sasl.oauthbearer.expected.issuer = null 07:47:34 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 07:47:34 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 07:47:34 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 07:47:34 kafka | sasl.oauthbearer.jwks.endpoint.url = null 07:47:34 kafka | sasl.oauthbearer.scope.claim.name = scope 07:47:34 kafka | sasl.oauthbearer.sub.claim.name = sub 07:47:34 kafka | 
sasl.oauthbearer.token.endpoint.url = null 07:47:34 kafka | sasl.server.callback.handler.class = null 07:47:34 kafka | sasl.server.max.receive.size = 524288 07:47:34 kafka | security.inter.broker.protocol = PLAINTEXT 07:47:34 kafka | security.providers = null 07:47:34 kafka | socket.connection.setup.timeout.max.ms = 30000 07:47:34 kafka | socket.connection.setup.timeout.ms = 10000 07:47:34 kafka | socket.listen.backlog.size = 50 07:47:34 kafka | socket.receive.buffer.bytes = 102400 07:47:34 kafka | socket.request.max.bytes = 104857600 07:47:34 kafka | socket.send.buffer.bytes = 102400 07:47:34 kafka | ssl.cipher.suites = [] 07:47:34 kafka | ssl.client.auth = none 07:47:34 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 07:47:34 kafka | ssl.endpoint.identification.algorithm = https 07:47:34 kafka | ssl.engine.factory.class = null 07:47:34 kafka | ssl.key.password = null 07:47:34 kafka | ssl.keymanager.algorithm = SunX509 07:47:34 kafka | ssl.keystore.certificate.chain = null 07:47:34 kafka | ssl.keystore.key = null 07:47:34 kafka | ssl.keystore.location = null 07:47:34 kafka | ssl.keystore.password = null 07:47:34 kafka | ssl.keystore.type = JKS 07:47:34 kafka | ssl.principal.mapping.rules = DEFAULT 07:47:34 kafka | ssl.protocol = TLSv1.3 07:47:34 kafka | ssl.provider = null 07:47:34 kafka | ssl.secure.random.implementation = null 07:47:34 kafka | ssl.trustmanager.algorithm = PKIX 07:47:34 kafka | ssl.truststore.certificates = null 07:47:34 kafka | ssl.truststore.location = null 07:47:34 kafka | ssl.truststore.password = null 07:47:34 kafka | ssl.truststore.type = JKS 07:47:34 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 07:47:34 kafka | transaction.max.timeout.ms = 900000 07:47:34 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 07:47:34 kafka | transaction.state.log.load.buffer.size = 5242880 07:47:34 kafka | transaction.state.log.min.isr = 2 07:47:34 kafka | transaction.state.log.num.partitions = 50 
07:47:34 kafka | transaction.state.log.replication.factor = 3 07:47:34 kafka | transaction.state.log.segment.bytes = 104857600 07:47:34 kafka | transactional.id.expiration.ms = 604800000 07:47:34 kafka | unclean.leader.election.enable = false 07:47:34 kafka | zookeeper.clientCnxnSocket = null 07:47:34 kafka | zookeeper.connect = zookeeper:2181 07:47:34 kafka | zookeeper.connection.timeout.ms = null 07:47:34 kafka | zookeeper.max.in.flight.requests = 10 07:47:34 kafka | zookeeper.metadata.migration.enable = false 07:47:34 kafka | zookeeper.session.timeout.ms = 18000 07:47:34 kafka | zookeeper.set.acl = false 07:47:34 kafka | zookeeper.ssl.cipher.suites = null 07:47:34 kafka | zookeeper.ssl.client.enable = false 07:47:34 kafka | zookeeper.ssl.crl.enable = false 07:47:34 kafka | zookeeper.ssl.enabled.protocols = null 07:47:34 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 07:47:34 kafka | zookeeper.ssl.keystore.location = null 07:47:34 kafka | zookeeper.ssl.keystore.password = null 07:47:34 kafka | zookeeper.ssl.keystore.type = null 07:47:34 kafka | zookeeper.ssl.ocsp.enable = false 07:47:34 kafka | zookeeper.ssl.protocol = TLSv1.2 07:47:34 kafka | zookeeper.ssl.truststore.location = null 07:47:34 kafka | zookeeper.ssl.truststore.password = null 07:47:34 kafka | zookeeper.ssl.truststore.type = null 07:47:34 kafka | (kafka.server.KafkaConfig) 07:47:34 kafka | [2025-06-16 07:45:49,955] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 07:47:34 kafka | [2025-06-16 07:45:49,955] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 07:47:34 kafka | [2025-06-16 07:45:49,956] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 07:47:34 kafka | [2025-06-16 07:45:49,958] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 07:47:34 kafka | 
[2025-06-16 07:45:49,997] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:45:50,003] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:45:50,017] INFO Loaded 0 logs in 20ms. (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:45:50,018] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:45:50,020] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:45:50,033] INFO Starting the log cleaner (kafka.log.LogCleaner) 07:47:34 kafka | [2025-06-16 07:45:50,085] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 07:47:34 kafka | [2025-06-16 07:45:50,103] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 07:47:34 kafka | [2025-06-16 07:45:50,119] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 07:47:34 kafka | [2025-06-16 07:45:50,170] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 07:47:34 kafka | [2025-06-16 07:45:50,522] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 07:47:34 kafka | [2025-06-16 07:45:50,526] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 07:47:34 kafka | [2025-06-16 07:45:50,548] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 07:47:34 kafka | [2025-06-16 07:45:50,549] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 07:47:34 kafka | [2025-06-16 07:45:50,549] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 07:47:34 kafka | [2025-06-16 07:45:50,554] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 07:47:34 kafka | [2025-06-16 07:45:50,558] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 07:47:34 kafka | [2025-06-16 07:45:50,577] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,579] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,586] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,586] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,599] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 07:47:34 kafka | [2025-06-16 07:45:50,621] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 07:47:34 kafka | [2025-06-16 07:45:50,653] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750059950638,1750059950638,1,0,0,72057605217583105,258,0,27 07:47:34 kafka | (kafka.zk.KafkaZkClient) 07:47:34 kafka | [2025-06-16 07:45:50,654] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 07:47:34 kafka | [2025-06-16 07:45:50,711] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 07:47:34 kafka | [2025-06-16 07:45:50,723] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,733] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 07:47:34 kafka | [2025-06-16 07:45:50,744] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,744] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,756] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,761] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,767] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 07:47:34 kafka | [2025-06-16 07:45:50,781] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:45:50,788] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:45:50,807] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 07:47:34 kafka | [2025-06-16 07:45:50,811] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 07:47:34 kafka | [2025-06-16 07:45:50,812] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 07:47:34 kafka | [2025-06-16 07:45:50,818] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 07:47:34 kafka | [2025-06-16 07:45:50,818] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,830] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,834] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,836] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,847] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 07:47:34 kafka | [2025-06-16 07:45:50,852] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,859] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,865] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 07:47:34 kafka | [2025-06-16 07:45:50,873] INFO [/config/changes-event-process-thread]: Starting 
(kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 07:47:34 kafka | [2025-06-16 07:45:50,881] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 07:47:34 kafka | [2025-06-16 07:45:50,882] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,883] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,884] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,885] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,887] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 07:47:34 kafka | [2025-06-16 07:45:50,888] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,888] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,889] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,889] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 07:47:34 kafka | [2025-06-16 07:45:50,890] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,894] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 07:47:34 kafka | [2025-06-16 07:45:50,902] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 07:47:34 kafka | 
[2025-06-16 07:45:50,902] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 07:47:34 kafka | [2025-06-16 07:45:50,902] INFO Kafka startTimeMs: 1750059950893 (org.apache.kafka.common.utils.AppInfoParser) 07:47:34 kafka | [2025-06-16 07:45:50,904] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 07:47:34 kafka | [2025-06-16 07:45:50,906] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,906] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,911] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,911] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,912] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,914] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,918] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 07:47:34 kafka | [2025-06-16 07:45:50,918] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,919] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 07:47:34 kafka | [2025-06-16 07:45:50,925] INFO [Controller id=1] 
Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,925] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,931] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,935] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,942] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,962] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:50,990] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 07:47:34 kafka | [2025-06-16 07:45:50,993] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 07:47:34 kafka | [2025-06-16 07:45:51,063] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 07:47:34 kafka | [2025-06-16 07:45:55,964] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:45:55,964] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:46:19,356] DEBUG [Controller id=1] There is 
no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:46:19,365] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 07:47:34 kafka | [2025-06-16 07:46:19,366] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 07:47:34 kafka | [2025-06-16 07:46:19,384] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:46:19,431] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: 
[HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(VVWEYQqiRD-ZFbWV35Ulow),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(CqCk0aD-RdOscpquAPRaLw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:46:19,437] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 07:47:34 kafka | [2025-06-16 07:46:19,440] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,440] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | 
[2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 
07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,443] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 07:47:34 kafka 
| [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 
07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,450] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,451] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,452] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,453] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,453] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,453] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,605] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,606] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,607] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,613] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,614] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,615] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,620] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,622] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,625] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,628] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,632] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,636] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,636] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | 
[2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,637] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,638] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,639] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,640] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,679] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,680] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,681] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
07:47:34 kafka | [2025-06-16 07:46:19,682] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,737] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,751] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,753] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,754] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,758] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,775] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,776] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,776] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,777] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,777] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,788] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,789] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,789] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,789] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,789] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,821] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,822] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,822] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,822] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,822] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,833] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,834] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,834] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,834] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,834] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,842] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,842] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,842] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,842] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,843] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,852] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,853] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,853] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,853] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,853] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,865] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,866] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,866] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,866] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,866] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,876] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,877] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,877] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,877] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,877] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,891] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,892] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,892] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,892] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,892] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,900] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,901] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,901] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,901] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,901] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,909] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,910] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,910] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,910] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,911] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,919] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,920] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,920] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,920] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,920] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,926] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,927] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,927] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,927] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,927] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,935] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,936] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,937] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,937] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,937] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,944] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,945] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,945] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,945] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,945] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:19,953] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
07:47:34 kafka | [2025-06-16 07:46:19,953] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
07:47:34 kafka | [2025-06-16 07:46:19,953] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,953] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
07:47:34 kafka | [2025-06-16 07:46:19,954] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,962] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:19,962] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:19,962] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,962] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,962] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,975] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:19,976] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:19,976] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,976] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,976] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,983] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:19,984] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:19,984] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,984] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,984] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,990] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:19,991] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:19,991] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,991] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,991] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:19,998] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:19,999] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:19,999] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,999] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:19,999] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,005] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,006] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,006] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,006] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,006] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,036] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,037] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,037] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,037] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,037] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,046] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,048] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,048] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,049] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,050] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,061] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,065] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,067] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,067] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,068] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,074] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,075] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,075] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,075] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,075] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,085] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,086] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,086] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,086] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,086] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,096] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,097] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,097] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,097] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,097] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,106] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,109] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,110] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,110] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,110] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,130] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,131] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,131] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,132] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,133] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,142] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,143] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,143] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,143] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,143] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,152] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,152] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,152] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,152] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,153] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,162] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,163] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,163] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,163] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,163] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,171] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,172] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,172] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,172] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,172] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(VVWEYQqiRD-ZFbWV35Ulow) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,180] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,180] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,180] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,181] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,181] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,187] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,188] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,188] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,188] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,188] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,196] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,197] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,197] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,197] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,197] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,204] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,204] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,205] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,205] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,205] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,211] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,212] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,212] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,212] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,212] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,219] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,220] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,220] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,220] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,220] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,226] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,227] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,227] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,227] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,227] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,258] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,259] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,260] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,260] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,260] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,267] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,268] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,268] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,268] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,268] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,275] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,276] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,276] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,276] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,276] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,283] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,283] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,283] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,283] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,284] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,290] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,291] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,291] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,291] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,291] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,299] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,300] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,300] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,300] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,300] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,308] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,309] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,309] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,309] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,309] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,315] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,316] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,316] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,316] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,316] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,323] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 07:47:34 kafka | [2025-06-16 07:46:20,324] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 07:47:34 kafka | [2025-06-16 07:46:20,324] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,324] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 07:47:34 kafka | [2025-06-16 07:46:20,324] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(CqCk0aD-RdOscpquAPRaLw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-4 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,329] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 07:47:34 kafka | [2025-06-16 
07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 
(state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,330] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,338] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,339] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] 
INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,340] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,341] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,342] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,343] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,344] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,345] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,346] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,347] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:20,347] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,350] INFO [Broker id=1] Finished LeaderAndIsr request in 719ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,351] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,360] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,361] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,362] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,362] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=CqCk0aD-RdOscpquAPRaLw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=VVWEYQqiRD-ZFbWV35Ulow, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 25 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 26 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 27 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 27 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 28 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 30 milliseconds for epoch 0, of which 30 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,373] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,374] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,374] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,375] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,376] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,377] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,378] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,379] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,380] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
07:47:34 kafka | [2025-06-16 07:46:20,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,380] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,380] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,380] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,380] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,380] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,380] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,380] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 37 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 37 milliseconds for epoch 0, of which 37 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,381] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 36 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,381] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,381] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,381] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,382] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,382] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,383] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,383] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 38 milliseconds for epoch 0, of which 36 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,384] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 07:47:34 kafka | [2025-06-16 07:46:20,384] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 39 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,384] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 39 milliseconds for epoch 0, of which 39 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,384] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 38 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,384] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 38 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,385] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 39 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 07:47:34 kafka | [2025-06-16 07:46:20,385] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 39 milliseconds for epoch 0, of which 39 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,385] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 39 milliseconds for epoch 0, of which 39 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,385] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 39 milliseconds for epoch 0, of which 39 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,385] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 38 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:20,386] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 39 milliseconds for epoch 0, of which 38 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
07:47:34 kafka | [2025-06-16 07:46:21,153] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:21,174] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c) (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:21,189] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 3eb43c00-b034-4edb-9227-bcdf22a1f069 in Empty state. Created a new member id consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:21,191] INFO [GroupCoordinator 1]: Preparing to rebalance group 3eb43c00-b034-4edb-9227-bcdf22a1f069 in state PreparingRebalance with old generation 0 (__consumer_offsets-4) (reason: Adding new member consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc with group instance id None; client reason: need to re-join with the given member-id: consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc) (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:22,297] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a in Empty state. Created a new member id consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:22,304] INFO [GroupCoordinator 1]: Preparing to rebalance group 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a in state PreparingRebalance with old generation 0 (__consumer_offsets-11) (reason: Adding new member consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b with group instance id None; client reason: need to re-join with the given member-id: consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b) (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:24,187] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:24,192] INFO [GroupCoordinator 1]: Stabilized group 3eb43c00-b034-4edb-9227-bcdf22a1f069 generation 1 (__consumer_offsets-4) with 1 members (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:24,212] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:24,212] INFO [GroupCoordinator 1]: Assignment received from leader consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc for group 3eb43c00-b034-4edb-9227-bcdf22a1f069 for generation 1. The group has 1 members, 0 of which are static.
(kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:25,305] INFO [GroupCoordinator 1]: Stabilized group 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a generation 1 (__consumer_offsets-11) with 1 members (kafka.coordinator.group.GroupCoordinator)
07:47:34 kafka | [2025-06-16 07:46:25,327] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b for group 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
07:47:35 policy-api | Waiting for policy-db-migrator port 6824...
07:47:35 policy-api | policy-db-migrator (172.17.0.6:6824) open
07:47:35 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
07:47:35 policy-api |
07:47:35 policy-api | . ____ _ __ _ _
07:47:35 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
07:47:35 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
07:47:35 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
07:47:35 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
07:47:35 policy-api | =========|_|==============|___/=/_/_/_/
07:47:35 policy-api |
07:47:35 policy-api | :: Spring Boot :: (v3.4.6)
07:47:35 policy-api |
07:47:35 policy-api | [2025-06-16T07:45:58.600+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
07:47:35 policy-api | [2025-06-16T07:45:58.659+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin)
07:47:35 policy-api | [2025-06-16T07:45:58.660+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
07:47:35 policy-api | [2025-06-16T07:46:00.093+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
07:47:35 policy-api | [2025-06-16T07:46:00.264+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 160 ms. Found 6 JPA repository interfaces.
07:47:35 policy-api | [2025-06-16T07:46:00.962+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
07:47:35 policy-api | [2025-06-16T07:46:00.976+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
07:47:35 policy-api | [2025-06-16T07:46:00.978+00:00|INFO|StandardService|main] Starting service [Tomcat]
07:47:35 policy-api | [2025-06-16T07:46:00.978+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
07:47:35 policy-api | [2025-06-16T07:46:01.020+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
07:47:35 policy-api | [2025-06-16T07:46:01.021+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2307 ms
07:47:35 policy-api | [2025-06-16T07:46:01.346+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
07:47:35 policy-api | [2025-06-16T07:46:01.430+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
07:47:35 policy-api | [2025-06-16T07:46:01.479+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
07:47:35 policy-api | [2025-06-16T07:46:01.841+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
07:47:35 policy-api | [2025-06-16T07:46:01.879+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
07:47:35 policy-api | [2025-06-16T07:46:02.079+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd
07:47:35 policy-api | [2025-06-16T07:46:02.081+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
07:47:35 policy-api | [2025-06-16T07:46:02.160+00:00|INFO|pooling|main] HHH10001005: Database info:
07:47:35 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
07:47:35 policy-api | Database driver: undefined/unknown
07:47:35 policy-api | Database version: 16.4
07:47:35 policy-api | Autocommit mode: undefined/unknown
07:47:35 policy-api | Isolation level: undefined/unknown
07:47:35 policy-api | Minimum pool size: undefined/unknown
07:47:35 policy-api | Maximum pool size: undefined/unknown
07:47:35 policy-api | [2025-06-16T07:46:04.058+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
07:47:35 policy-api | [2025-06-16T07:46:04.062+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
07:47:35 policy-api | [2025-06-16T07:46:04.663+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
07:47:35 policy-api | [2025-06-16T07:46:05.499+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
07:47:35 policy-api | [2025-06-16T07:46:06.607+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
07:47:35 policy-api | [2025-06-16T07:46:06.652+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
07:47:35 policy-api | [2025-06-16T07:46:07.310+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
07:47:35 policy-api | [2025-06-16T07:46:07.446+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
07:47:35 policy-api | [2025-06-16T07:46:07.464+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
07:47:35 policy-api | [2025-06-16T07:46:07.487+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.542 seconds (process running for 10.149)
07:47:35 policy-api | [2025-06-16T07:46:39.918+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
07:47:35 policy-api | [2025-06-16T07:46:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
07:47:35 policy-api | [2025-06-16T07:46:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
07:47:35 policy-csit | Invoking the robot tests from: drools-pdp-test.robot
07:47:35 policy-csit | Run Robot test
07:47:35 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
07:47:35 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
07:47:35 policy-csit | -v POLICY_API_IP:policy-api:6969
07:47:35 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
07:47:35 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
07:47:35 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
07:47:35 policy-csit | -v APEX_IP:policy-apex-pdp:6969
07:47:35 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
07:47:35 policy-csit | -v KAFKA_IP:kafka:9092
07:47:35 policy-csit | -v PROMETHEUS_IP:prometheus:9090
07:47:35 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
07:47:35 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
07:47:35 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
07:47:35 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
07:47:35 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
07:47:35 policy-csit | -v TEMP_FOLDER:/tmp/distribution
07:47:35 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
07:47:35 policy-csit | -v TEST_ENV:docker
07:47:35 policy-csit | -v JAEGER_IP:jaeger:16686
07:47:35 policy-csit | Starting Robot test suites ...
07:47:35 policy-csit | ==============================================================================
07:47:35 policy-csit | Drools-Pdp-Test
07:47:35 policy-csit | ==============================================================================
07:47:35 policy-csit | Alive :: Runs Policy PDP Alive Check | PASS |
07:47:35 policy-csit | ------------------------------------------------------------------------------
07:47:35 policy-csit | Metrics :: Verify drools-pdp is exporting metrics | PASS |
07:47:35 policy-csit | ------------------------------------------------------------------------------
07:47:35 policy-csit | Drools-Pdp-Test | PASS |
07:47:35 policy-csit | 2 tests, 2 passed, 0 failed
07:47:35 policy-csit | ==============================================================================
07:47:35 policy-csit | Output: /tmp/results/output.xml
07:47:35 policy-csit | Log: /tmp/results/log.html
07:47:35 policy-csit | Report: /tmp/results/report.html
07:47:35 policy-csit | RESULT: 0
07:47:35 policy-db-migrator | Waiting for postgres port 5432...
07:47:35 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
07:47:35 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
07:47:35 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
07:47:35 policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded!
07:47:35 policy-db-migrator | Initializing policyadmin...
07:47:35 policy-db-migrator | 321 blocks
07:47:35 policy-db-migrator | Preparing upgrade release version: 0800
07:47:35 policy-db-migrator | Preparing upgrade release version: 0900
07:47:35 policy-db-migrator | Preparing upgrade release version: 1000
07:47:35 policy-db-migrator | Preparing upgrade release version: 1100
07:47:35 policy-db-migrator | Preparing upgrade release version: 1200
07:47:35 policy-db-migrator | Preparing upgrade release version: 1300
07:47:35 policy-db-migrator | Done
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | -------------+---------
07:47:35 policy-db-migrator | policyadmin | 0
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
07:47:35 policy-db-migrator | (0 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | upgrade: 0 -> 1300
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 07:47:35 
policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 
0350-jpatoscaproperty_constraints.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 
07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0450-pdpgroup.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0470-pdp.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 
0530-toscacapabilityassignments_toscacapabilityassignment.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0570-toscadatatype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator 
| > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0630-toscanodetype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0660-toscaparameter.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0670-toscapolicies.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0690-toscapolicy.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 
07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0730-toscaproperty.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0770-toscarequirement.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0780-toscarequirements.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 
0800-toscaservicetemplate.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0820-toscatrigger.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 
07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 
0980-FK_ToscaNodeType_requirementsName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 07:47:35 policy-db-migrator | ALTER TABLE 
07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0100-pdp.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 07:47:35 policy-db-migrator | UPDATE 0 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 07:47:35 policy-db-migrator | UPDATE 0 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 
policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0210-sequence.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0220-sequence.sql 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0120-toscatrigger.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0140-toscaparameter.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 
policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0150-toscaproperty.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0100-upgrade.sql 07:47:35 policy-db-migrator | msg 07:47:35 policy-db-migrator | --------------------------- 07:47:35 policy-db-migrator | upgrade to 1100 completed 07:47:35 policy-db-migrator | (1 row) 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 07:47:35 policy-db-migrator | ALTER TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 07:47:35 policy-db-migrator | DROP INDEX 07:47:35 policy-db-migrator | CREATE INDEX 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 
0120-audit_sequence.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | rc=0 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 07:47:35 policy-db-migrator | DROP TABLE 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | INSERT 0 1 07:47:35 policy-db-migrator | policyadmin: OK: upgrade (1300) 07:47:35 policy-db-migrator | List of databases 07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | 
en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 07:47:35 policy-db-migrator | (9 rows) 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 07:47:35 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | name | version 07:47:35 policy-db-migrator | -------------+--------- 07:47:35 policy-db-migrator | policyadmin | 1300 07:47:35 policy-db-migrator | (1 row) 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 07:47:35 policy-db-migrator | 
-----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
07:47:35 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:45.899933
07:47:35 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:45.954323
07:47:35 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.009778
07:47:35 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.05943
07:47:35 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.110767
07:47:35 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.177494
07:47:35 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.224951
07:47:35 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.278869
07:47:35 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.32748
07:47:35 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.390617
07:47:35 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.438333
07:47:35 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.481073
07:47:35 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.5302
07:47:35 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.579227
07:47:35 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.631148
07:47:35 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.682091
07:47:35 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.738235
07:47:35 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.797286
07:47:35 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.849374
07:47:35 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.895194
07:47:35 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.937408
07:47:35 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:46.985663
07:47:35 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.039638
07:47:35 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.091753
07:47:35 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.141077
07:47:35 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.198237
07:47:35 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.259053
07:47:35 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.303423
07:47:35 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.351349
07:47:35 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.39492
07:47:35 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.441199
07:47:35 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.525927
07:47:35 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.579054
07:47:35 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.632881
07:47:35 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.689401
07:47:35 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.738194
07:47:35 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.785775
07:47:35 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.842162
07:47:35 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.8947
07:47:35 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:47.966412
07:47:35 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.028539
07:47:35 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.08731
07:47:35 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.160355
07:47:35 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.217764
07:47:35 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.272501
07:47:35 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.327231
07:47:35 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.392802
07:47:35 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.444908
07:47:35 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.494192
07:47:35 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.545561
07:47:35 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.611174
07:47:35 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.665946
07:47:35 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.715515
07:47:35 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.768891
07:47:35 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.832116
07:47:35 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.887115
07:47:35 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.939283
07:47:35 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:48.987835
07:47:35 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.055036
07:47:35 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.104275
07:47:35 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.151142
07:47:35 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.207877
07:47:35 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.277785
07:47:35 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.331533
07:47:35 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.395552
07:47:35 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.450878
07:47:35 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.515768
07:47:35 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.570014
07:47:35 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.619173
07:47:35 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.668282
07:47:35 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.722744
07:47:35 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.793191
07:47:35 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.850641
07:47:35 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.907701
07:47:35 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:49.960517
07:47:35 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.010346
07:47:35 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.058075
07:47:35 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.113339
07:47:35 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.187966
07:47:35 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.242385
07:47:35 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.297354
07:47:35 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.372165
07:47:35 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.423017
07:47:35 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.474539
07:47:35 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.526226
07:47:35 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.594035
07:47:35 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.639893
07:47:35 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.687783
07:47:35 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.737876
07:47:35 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.786322
07:47:35 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.848379
07:47:35 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.897203
07:47:35 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.94553
07:47:35 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:50.991693
07:47:35 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:51.081622
07:47:35 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606250745450800u | 1 | 2025-06-16 07:45:51.141933
07:47:35 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.193061
07:47:35 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.245464
07:47:35 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.29694
07:47:35 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.34728
07:47:35 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.401767
07:47:35 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.454016
07:47:35 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.501415
07:47:35 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.553413
07:47:35 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.609104
07:47:35 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.669056
07:47:35 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.726521
07:47:35 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.781641
07:47:35 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606250745450900u | 1 | 2025-06-16 07:45:51.835565
07:47:35 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:51.885387
07:47:35 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:51.941593
07:47:35 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:51.988663
07:47:35 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.039819
07:47:35 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.095712
07:47:35 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.155963
07:47:35 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.214898
07:47:35 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.270988
07:47:35 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606250745451000u | 1 | 2025-06-16 07:45:52.331402
07:47:35 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606250745451100u | 1 | 2025-06-16 07:45:52.380851
07:47:35 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606250745451200u | 1 | 2025-06-16 07:45:52.433184
07:47:35 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606250745451200u | 1 | 2025-06-16 07:45:52.49017
07:47:35 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606250745451200u | 1 | 2025-06-16 07:45:52.557655
07:47:35 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606250745451200u | 1 | 2025-06-16 07:45:52.617361
07:47:35 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606250745451300u | 1 | 2025-06-16 07:45:52.673406
07:47:35 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606250745451300u | 1 | 2025-06-16 07:45:52.725472
07:47:35 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606250745451300u | 1 | 2025-06-16 07:45:52.782156
07:47:35 policy-db-migrator | (126 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | policyadmin: OK @ 1300
07:47:35 policy-db-migrator | Initializing clampacm...
07:47:35 policy-db-migrator | 97 blocks
07:47:35 policy-db-migrator | Preparing upgrade release version: 1400
07:47:35 policy-db-migrator | Preparing upgrade release version: 1500
07:47:35 policy-db-migrator | Preparing upgrade release version: 1600
07:47:35 policy-db-migrator | Preparing upgrade release version: 1601
07:47:35 policy-db-migrator | Preparing upgrade release version: 1700
07:47:35 policy-db-migrator | Preparing upgrade release version: 1701
07:47:35 policy-db-migrator | Done
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | ----------+---------
07:47:35 policy-db-migrator | clampacm | 0
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
07:47:35 policy-db-migrator | (0 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | clampacm: upgrade available: 0 -> 1701
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | upgrade: 0 -> 1701
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-automationcomposition.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0500-participant.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-automationcomposition.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0300-participantreplica.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0400-participant.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-automationcomposition.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-automationcomposition.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-message.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-messagejob.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0200-automationcomposition.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0800-participantreplica.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | UPDATE 0
07:47:35 policy-db-migrator | ALTER TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | clampacm: OK: upgrade (1701)
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
07:47:35 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
07:47:35 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
07:47:35 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
07:47:35 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
07:47:35 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
07:47:35 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | ----------+---------
07:47:35 policy-db-migrator | clampacm | 1701
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
07:47:35 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.45549
07:47:35 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.512931
07:47:35 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.567247
07:47:35 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.633142
07:47:35 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.689595
07:47:35 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.745838
07:47:35 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.79892
07:47:35 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.851318
07:47:35 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.903787
07:47:35 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:53.957146
07:47:35 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:54.014195
07:47:35 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:54.070628
07:47:35 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606250745531400u | 1 | 2025-06-16 07:45:54.119731
07:47:35 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.170533
07:47:35 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.225273
07:47:35 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.290109
07:47:35 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.343631
07:47:35 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.401362
07:47:35 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.457928
07:47:35 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.522453
07:47:35 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606250745531500u | 1 | 2025-06-16 07:45:54.571015
07:47:35 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606250745531600u | 1 | 2025-06-16 07:45:54.625694
07:47:35 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606250745531600u | 1 | 2025-06-16 07:45:54.676603
07:47:35 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606250745531601u | 1 | 2025-06-16 07:45:54.750202
07:47:35 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606250745531601u | 1 | 2025-06-16 07:45:54.800736
07:47:35 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606250745531700u | 1 | 2025-06-16 07:45:54.857521
07:47:35 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606250745531700u | 1 | 2025-06-16 07:45:54.916793
07:47:35 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606250745531700u | 1 | 2025-06-16 07:45:54.96834
07:47:35 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.027586
07:47:35 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.082171
07:47:35 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.151932
07:47:35 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.206757
07:47:35 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.263537
07:47:35 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.313196
07:47:35 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.371918
07:47:35 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.421946
07:47:35 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606250745531701u | 1 | 2025-06-16 07:45:55.473519
07:47:35 policy-db-migrator | (37 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | clampacm: OK @ 1701
07:47:35 policy-db-migrator | Initializing pooling...
07:47:35 policy-db-migrator | 4 blocks
07:47:35 policy-db-migrator | Preparing upgrade release version: 1600
07:47:35 policy-db-migrator | Done
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | ---------+---------
07:47:35 policy-db-migrator | pooling | 0
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
07:47:35 policy-db-migrator | (0 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | pooling: upgrade available: 0 -> 1600
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | upgrade: 0 -> 1600
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-distributed.locking.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | pooling: OK: upgrade (1600)
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 07:47:35 policy-db-migrator | CREATE TABLE 07:47:35 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 07:47:35 policy-db-migrator | name | version 07:47:35 policy-db-migrator | ---------+--------- 07:47:35 policy-db-migrator | pooling | 1600 07:47:35 policy-db-migrator | (1 row) 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 07:47:35 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 07:47:35 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606250745561600u | 1 | 2025-06-16 07:45:56.165587 07:47:35 policy-db-migrator | (1 row) 07:47:35 policy-db-migrator | 07:47:35 policy-db-migrator | pooling: OK @ 1600 07:47:35 policy-db-migrator | Initializing operationshistory... 
07:47:35 policy-db-migrator | 6 blocks
07:47:35 policy-db-migrator | Preparing upgrade release version: 1600
07:47:35 policy-db-migrator | Done
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | -------------------+---------
07:47:35 policy-db-migrator | operationshistory | 0
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
07:47:35 policy-db-migrator | (0 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | upgrade: 0 -> 1600
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | rc=0
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | > upgrade 0110-operationshistory.sql
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | CREATE INDEX
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | INSERT 0 1
07:47:35 policy-db-migrator | operationshistory: OK: upgrade (1600)
07:47:35 policy-db-migrator | List of databases
07:47:35 policy-db-migrator | [listing identical to the one above; omitted]
07:47:35 policy-db-migrator | (9 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
07:47:35 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
07:47:35 policy-db-migrator | CREATE TABLE
07:47:35 policy-db-migrator | name | version
07:47:35 policy-db-migrator | -------------------+---------
07:47:35 policy-db-migrator | operationshistory | 1600
07:47:35 policy-db-migrator | (1 row)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
07:47:35 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
07:47:35 policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606250745561600u | 1 | 2025-06-16 07:45:56.846424
07:47:35 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606250745561600u | 1 | 2025-06-16 07:45:56.935083
07:47:35 policy-db-migrator | (2 rows)
07:47:35 policy-db-migrator |
07:47:35 policy-db-migrator | operationshistory: OK @ 1600
07:47:35 policy-drools-pdp | Waiting for pap port 6969...
07:47:35 policy-drools-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused
07:47:35 policy-drools-pdp | [previous line repeated while waiting for pap to come up]
07:47:35 policy-drools-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
07:47:35 policy-drools-pdp | Waiting for kafka port 9092...
07:47:35 policy-drools-pdp | Connection to kafka (172.17.0.8) 9092 port [tcp/*] succeeded!
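The "Connection refused" retries above are a standard wait-for-dependency loop around `nc`. A minimal sketch of such a loop is shown below; `wait_for_port` is an assumed helper name for illustration, not the actual code in pdpd-entrypoint.sh.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a wait-for-port loop like the one producing the
# retries above. The function name is an assumption; the messages mirror
# the log output.
wait_for_port() {
  local host=$1 port=$2
  # nc -z: probe the port without sending data; -w1: 1-second timeout
  while ! nc -z -w1 "$host" "$port" 2>/dev/null; do
    echo "nc: connect to $host port $port (tcp) failed: Connection refused"
    sleep 1
  done
  echo "Connection to $host $port port [tcp/*] succeeded!"
}

# usage (would block until pap:6969 accepts connections):
# wait_for_port pap 6969
```

Blocking the container entrypoint this way ensures the PDP only starts once PAP and Kafka are reachable.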
07:47:35 policy-drools-pdp | -- /opt/app/policy/bin/pdpd-entrypoint.sh boot -- 07:47:35 policy-drools-pdp | -- dockerBoot -- 07:47:35 policy-drools-pdp | -- configure -- 07:47:35 policy-drools-pdp | + operation=boot 07:47:35 policy-drools-pdp | + dockerBoot 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- dockerBoot --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + set -e 07:47:35 policy-drools-pdp | + configure 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- configure --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + reload 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | -- reload -- 07:47:35 policy-drools-pdp | -- systemConfs -- 07:47:35 policy-drools-pdp | + echo '-- reload --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + systemConfs 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- systemConfs --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + local confName 07:47:35 policy-drools-pdp | + ls '/tmp/policy-install/config/*.conf' 07:47:35 policy-drools-pdp | -- maven -- 07:47:35 policy-drools-pdp | + return 0 07:47:35 policy-drools-pdp | -- features -- 07:47:35 policy-drools-pdp | + maven 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- maven --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + '[' -f /tmp/policy-install/config/settings.xml ] 07:47:35 policy-drools-pdp | + '[' -f /tmp/policy-install/config/standalone-settings.xml ] 07:47:35 policy-drools-pdp | + features 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- features --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + ls '/tmp/policy-install/config/features*.zip' 07:47:35 policy-drools-pdp | + return 0 07:47:35 policy-drools-pdp | + security 07:47:35 policy-drools-pdp | 
+ '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- security --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-keystore ] 07:47:35 policy-drools-pdp | + '[' -f /tmp/policy-install/config/policy-truststore ] 07:47:35 policy-drools-pdp | + serverConfig properties 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- serverConfig --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + local 'configExtSuffix=properties' 07:47:35 policy-drools-pdp | -- security -- 07:47:35 policy-drools-pdp | -- serverConfig -- 07:47:35 policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties 07:47:35 policy-drools-pdp | + ls /tmp/policy-install/config/engine-system.properties 07:47:35 policy-drools-pdp | configuration properties: /tmp/policy-install/config/engine-system.properties 07:47:35 policy-drools-pdp | + echo 'configuration properties: /tmp/policy-install/config/engine-system.properties' 07:47:35 policy-drools-pdp | + cp -f /tmp/policy-install/config/engine-system.properties /opt/app/policy/config 07:47:35 policy-drools-pdp | + serverConfig xml 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- serverConfig --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + local 'configExtSuffix=xml' 07:47:35 policy-drools-pdp | -- serverConfig -- 07:47:35 policy-drools-pdp | + ls '/tmp/policy-install/config/*.xml' 07:47:35 policy-drools-pdp | + return 0 07:47:35 policy-drools-pdp | + serverConfig json 07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- serverConfig --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + local 'configExtSuffix=json' 07:47:35 policy-drools-pdp | -- serverConfig -- 07:47:35 policy-drools-pdp | + ls '/tmp/policy-install/config/*.json' 07:47:35 policy-drools-pdp | + return 0 07:47:35 policy-drools-pdp | + scripts pre.sh 
07:47:35 policy-drools-pdp | + '[' y '=' y ] 07:47:35 policy-drools-pdp | + echo '-- scripts --' 07:47:35 policy-drools-pdp | + set -x 07:47:35 policy-drools-pdp | + local 'scriptExtSuffix=pre.sh' 07:47:35 policy-drools-pdp | -- scripts -- 07:47:35 policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh 07:47:35 policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh 07:47:35 policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' 07:47:35 policy-drools-pdp | + '[' -z /opt/app/policy ] 07:47:35 policy-drools-pdp | + set -a 07:47:35 policy-drools-pdp | + POLICY_HOME=/opt/app/policy 07:47:35 policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' 07:47:35 policy-drools-pdp | + '[' -d /opt/app/policy/bin ] 07:47:35 policy-drools-pdp | + PATH=/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 07:47:35 policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] 07:47:35 policy-drools-pdp | + PATH=/usr/lib/jvm/java-17-openjdk/bin:/opt/app/policy/bin:/usr/lib/jvm/default-jvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 07:47:35 policy-drools-pdp | + '[' -d /home/policy/bin ] 07:47:35 policy-drools-pdp | + set +a 07:47:35 policy-drools-pdp | + ls /tmp/policy-install/config/noop.pre.sh 07:47:35 policy-drools-pdp | + echo 'executing script: /tmp/policy-install/config/noop.pre.sh' 07:47:35 policy-drools-pdp | + source /tmp/policy-install/config/noop.pre.sh 07:47:35 policy-drools-pdp | executing script: /tmp/policy-install/config/noop.pre.sh 07:47:35 policy-drools-pdp | + chmod 644 /opt/app/policy/config/engine.properties /opt/app/policy/config/feature-lifecycle.properties 07:47:35 policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh 07:47:35 policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' 07:47:35 policy-drools-pdp | + '[' -z /opt/app/policy ] 07:47:35 policy-drools-pdp | + set -a 07:47:35 policy-drools-pdp | + POLICY_HOME=/opt/app/policy 
07:47:35 policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf'
07:47:35 policy-drools-pdp | + '[' -d /opt/app/policy/bin ]
07:47:35 policy-drools-pdp | + :
07:47:35 policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ]
07:47:35 policy-drools-pdp | + :
07:47:35 policy-drools-pdp | + '[' -d /home/policy/bin ]
07:47:35 policy-drools-pdp | + set +a
07:47:35 policy-drools-pdp | + policy exec
07:47:35 policy-drools-pdp | + BIN_SCRIPT=bin/policy-management-controller
07:47:35 policy-drools-pdp | + OPERATION=none
07:47:35 policy-drools-pdp | -- /opt/app/policy/bin/policy exec --
07:47:35 policy-drools-pdp | + '[' -z exec ]
07:47:35 policy-drools-pdp | + OPERATION=exec
07:47:35 policy-drools-pdp | + shift
07:47:35 policy-drools-pdp | + '[' -z ]
07:47:35 policy-drools-pdp | + '[' -z /opt/app/policy ]
07:47:35 policy-drools-pdp | + policy_exec
07:47:35 policy-drools-pdp | + '[' y '=' y ]
07:47:35 policy-drools-pdp | + echo '-- policy_exec --'
07:47:35 policy-drools-pdp | -- policy_exec --
07:47:35 policy-drools-pdp | + set -x
07:47:35 policy-drools-pdp | + cd /opt/app/policy
07:47:35 policy-drools-pdp | + check_x_file bin/policy-management-controller
07:47:35 policy-drools-pdp | + '[' y '=' y ]
07:47:35 policy-drools-pdp | + echo '-- check_x_file --'
07:47:35 policy-drools-pdp | + set -x
07:47:35 policy-drools-pdp | + FILE=bin/policy-management-controller
07:47:35 policy-drools-pdp | -- check_x_file --
07:47:35 policy-drools-pdp | + '[[' '!' -f bin/policy-management-controller '||' '!' -x bin/policy-management-controller ]]
07:47:35 policy-drools-pdp | + return 0
07:47:35 policy-drools-pdp | + bin/policy-management-controller exec
07:47:35 policy-drools-pdp | -- bin/policy-management-controller exec --
07:47:35 policy-drools-pdp | + _DIR=/opt/app/policy
07:47:35 policy-drools-pdp | + _LOGS=/var/log/onap/policy/pdpd
07:47:35 policy-drools-pdp | + '[' -z /var/log/onap/policy/pdpd ]
07:47:35 policy-drools-pdp | + CONTROLLER=policy-management-controller
07:47:35 policy-drools-pdp | + RETVAL=0
07:47:35 policy-drools-pdp | + _PIDFILE=/opt/app/policy/PID
07:47:35 policy-drools-pdp | + exec_start
07:47:35 policy-drools-pdp | + '[' y '=' y ]
07:47:35 policy-drools-pdp | + echo '-- exec_start --'
07:47:35 policy-drools-pdp | + set -x
07:47:35 policy-drools-pdp | + status
07:47:35 policy-drools-pdp | + '[' y '=' y ]
07:47:35 policy-drools-pdp | + echo '-- status --'
07:47:35 policy-drools-pdp | + set -x
07:47:35 policy-drools-pdp | + '[' -f /opt/app/policy/PID ]
07:47:35 policy-drools-pdp | + '[' true ]
07:47:35 policy-drools-pdp | -- exec_start --
07:47:35 policy-drools-pdp | -- status --
07:47:35 policy-drools-pdp | + pidof -s java
07:47:35 policy-drools-pdp | + _PID=
07:47:35 policy-drools-pdp | + _STATUS='Policy Management (no pidfile) is NOT running'
07:47:35 policy-drools-pdp | + _RUNNING=0
07:47:35 policy-drools-pdp | + '[' 0 '=' 1 ]
07:47:35 policy-drools-pdp | + RETVAL=1
07:47:35 policy-drools-pdp | Policy Management (no pidfile) is NOT running
07:47:35 policy-drools-pdp | + echo 'Policy Management (no pidfile) is NOT running'
07:47:35 policy-drools-pdp | + '[' 0 '=' 1 ]
07:47:35 policy-drools-pdp | + preRunning
07:47:35 policy-drools-pdp | + '[' y '=' y ]
07:47:35 policy-drools-pdp | + echo '-- preRunning --'
07:47:35 policy-drools-pdp | + set -x
07:47:35 policy-drools-pdp | + mkdir -p /var/log/onap/policy/pdpd
07:47:35 policy-drools-pdp | -- preRunning --
07:47:35 policy-drools-pdp | + ls /opt/app/policy/lib/accessors-smart-2.5.0.jar
/opt/app/policy/lib/angus-activation-2.0.2.jar /opt/app/policy/lib/ant-1.10.14.jar /opt/app/policy/lib/ant-launcher-1.10.14.jar /opt/app/policy/lib/antlr-runtime-3.5.2.jar /opt/app/policy/lib/antlr4-runtime-4.13.0.jar /opt/app/policy/lib/aopalliance-1.0.jar /opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar /opt/app/policy/lib/asm-9.3.jar /opt/app/policy/lib/byte-buddy-1.15.11.jar /opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/checker-qual-3.48.3.jar /opt/app/policy/lib/classgraph-4.8.179.jar /opt/app/policy/lib/classmate-1.5.1.jar /opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/commons-beanutils-1.10.1.jar /opt/app/policy/lib/commons-cli-1.9.0.jar /opt/app/policy/lib/commons-codec-1.18.0.jar /opt/app/policy/lib/commons-collections-3.2.2.jar /opt/app/policy/lib/commons-collections4-4.5.0-M3.jar /opt/app/policy/lib/commons-configuration2-2.11.0.jar /opt/app/policy/lib/commons-digester-2.1.jar /opt/app/policy/lib/commons-io-2.18.0.jar /opt/app/policy/lib/commons-jexl3-3.2.1.jar /opt/app/policy/lib/commons-lang3-3.17.0.jar /opt/app/policy/lib/commons-logging-1.3.5.jar /opt/app/policy/lib/commons-net-3.11.1.jar /opt/app/policy/lib/commons-text-1.13.0.jar /opt/app/policy/lib/commons-validator-1.8.0.jar /opt/app/policy/lib/core-0.12.4.jar /opt/app/policy/lib/drools-base-8.40.1.Final.jar /opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar /opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar /opt/app/policy/lib/drools-commands-8.40.1.Final.jar /opt/app/policy/lib/drools-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-core-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar /opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-ecj-8.40.1.Final.jar /opt/app/policy/lib/drools-engine-8.40.1.Final.jar /opt/app/policy/lib/drools-io-8.40.1.Final.jar /opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar 
/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar /opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar /opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar /opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar /opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar /opt/app/policy/lib/drools-tms-8.40.1.Final.jar /opt/app/policy/lib/drools-util-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar /opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar /opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar /opt/app/policy/lib/ecj-3.33.0.jar /opt/app/policy/lib/error_prone_annotations-2.36.0.jar /opt/app/policy/lib/failureaccess-1.0.3.jar /opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-2.12.1.jar /opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar /opt/app/policy/lib/guava-33.4.6-jre.jar /opt/app/policy/lib/guice-4.2.2-no_aop.jar /opt/app/policy/lib/handy-uri-templates-2.1.8.jar /opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar /opt/app/policy/lib/hibernate-core-6.6.16.Final.jar /opt/app/policy/lib/hk2-api-3.0.6.jar /opt/app/policy/lib/hk2-locator-3.0.6.jar /opt/app/policy/lib/hk2-utils-3.0.6.jar /opt/app/policy/lib/httpclient-4.5.13.jar /opt/app/policy/lib/httpcore-4.4.15.jar /opt/app/policy/lib/icu4j-74.2.jar /opt/app/policy/lib/istack-commons-runtime-4.1.2.jar /opt/app/policy/lib/j2objc-annotations-3.0.0.jar /opt/app/policy/lib/jackson-annotations-2.18.3.jar /opt/app/policy/lib/jackson-core-2.18.3.jar /opt/app/policy/lib/jackson-databind-2.18.3.jar /opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar /opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar 
/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar /opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar /opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar /opt/app/policy/lib/jakarta.activation-api-2.1.3.jar /opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar /opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar /opt/app/policy/lib/jakarta.el-api-3.0.3.jar /opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar /opt/app/policy/lib/jakarta.inject-2.6.1.jar /opt/app/policy/lib/jakarta.inject-api-2.0.1.jar /opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar /opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar /opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar /opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar /opt/app/policy/lib/jakarta.validation-api-3.1.1.jar /opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar /opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar /opt/app/policy/lib/jandex-3.2.0.jar /opt/app/policy/lib/javaparser-core-3.24.2.jar /opt/app/policy/lib/javassist-3.30.2-GA.jar /opt/app/policy/lib/javax.inject-1.jar /opt/app/policy/lib/jaxb-core-4.0.5.jar /opt/app/policy/lib/jaxb-impl-4.0.5.jar /opt/app/policy/lib/jaxb-runtime-4.0.5.jar /opt/app/policy/lib/jaxb-xjc-4.0.5.jar /opt/app/policy/lib/jboss-logging-3.5.0.Final.jar /opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar /opt/app/policy/lib/jcodings-1.0.58.jar /opt/app/policy/lib/jersey-client-3.1.10.jar /opt/app/policy/lib/jersey-common-3.1.10.jar /opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar /opt/app/policy/lib/jersey-hk2-3.1.10.jar /opt/app/policy/lib/jersey-server-3.1.10.jar /opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar /opt/app/policy/lib/jetty-http-12.0.21.jar /opt/app/policy/lib/jetty-io-12.0.21.jar /opt/app/policy/lib/jetty-security-12.0.21.jar /opt/app/policy/lib/jetty-server-12.0.21.jar /opt/app/policy/lib/jetty-session-12.0.21.jar /opt/app/policy/lib/jetty-util-12.0.21.jar /opt/app/policy/lib/joda-time-2.10.2.jar /opt/app/policy/lib/joni-2.2.1.jar 
/opt/app/policy/lib/json-path-2.9.0.jar /opt/app/policy/lib/json-smart-2.5.0.jar /opt/app/policy/lib/jsoup-1.17.2.jar /opt/app/policy/lib/jspecify-1.0.0.jar /opt/app/policy/lib/kafka-clients-3.9.1.jar /opt/app/policy/lib/kie-api-8.40.1.Final.jar /opt/app/policy/lib/kie-ci-8.40.1.Final.jar /opt/app/policy/lib/kie-internal-8.40.1.Final.jar /opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar /opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar /opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar /opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar /opt/app/policy/lib/logback-classic-1.5.18.jar /opt/app/policy/lib/logback-core-1.5.18.jar /opt/app/policy/lib/lombok-1.18.38.jar /opt/app/policy/lib/lz4-java-1.8.0.jar /opt/app/policy/lib/maven-artifact-3.8.6.jar /opt/app/policy/lib/maven-builder-support-3.8.6.jar /opt/app/policy/lib/maven-compat-3.8.6.jar /opt/app/policy/lib/maven-core-3.8.6.jar /opt/app/policy/lib/maven-model-3.8.6.jar /opt/app/policy/lib/maven-model-builder-3.8.6.jar /opt/app/policy/lib/maven-plugin-api-3.8.6.jar /opt/app/policy/lib/maven-repository-metadata-3.8.6.jar /opt/app/policy/lib/maven-resolver-api-1.6.3.jar /opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar /opt/app/policy/lib/maven-resolver-impl-1.6.3.jar /opt/app/policy/lib/maven-resolver-provider-3.8.6.jar /opt/app/policy/lib/maven-resolver-spi-1.6.3.jar /opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar /opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar /opt/app/policy/lib/maven-resolver-util-1.6.3.jar /opt/app/policy/lib/maven-settings-3.8.6.jar /opt/app/policy/lib/maven-settings-builder-3.8.6.jar /opt/app/policy/lib/maven-shared-utils-3.3.4.jar /opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/mvel2-2.5.2.Final.jar /opt/app/policy/lib/mxparser-1.2.2.jar 
/opt/app/policy/lib/opentelemetry-api-1.43.0.jar /opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar /opt/app/policy/lib/opentelemetry-context-1.43.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar /opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar /opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar /opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar /opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar /opt/app/policy/lib/osgi-resource-locator-1.0.3.jar /opt/app/policy/lib/plexus-cipher-2.0.jar /opt/app/policy/lib/plexus-classworlds-2.6.0.jar /opt/app/policy/lib/plexus-component-annotations-2.1.0.jar /opt/app/policy/lib/plexus-interpolation-1.26.jar /opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar /opt/app/policy/lib/plexus-utils-3.6.0.jar /opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar /opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/postgresql-42.7.5.jar /opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar 
/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar /opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar /opt/app/policy/lib/protobuf-java-3.22.0.jar /opt/app/policy/lib/re2j-1.8.jar /opt/app/policy/lib/slf4j-api-2.0.17.jar /opt/app/policy/lib/snakeyaml-2.4.jar /opt/app/policy/lib/snappy-java-1.1.10.5.jar /opt/app/policy/lib/swagger-annotations-2.2.29.jar /opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar /opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar /opt/app/policy/lib/txw2-4.0.5.jar /opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar /opt/app/policy/lib/wagon-http-3.5.1.jar /opt/app/policy/lib/wagon-http-shared-3.5.1.jar /opt/app/policy/lib/wagon-provider-api-3.5.1.jar /opt/app/policy/lib/xmlpull-1.1.3.1.jar /opt/app/policy/lib/xstream-1.4.20.jar /opt/app/policy/lib/zstd-jni-1.5.6-4.jar 07:47:35 policy-drools-pdp | + xargs -I X printf ':%s' X 07:47:35 policy-drools-pdp | + 
CP=:/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/a
pp/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310
-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/lib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/jon
i-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/
lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:
/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar 07:47:35 policy-drools-pdp | + source /opt/app/policy/etc/profile.d/env.sh 07:47:35 policy-drools-pdp | + templateRegex='^\$\{\{POLICY_HOME}}$' 07:47:35 policy-drools-pdp | + '[' -z /opt/app/policy ] 07:47:35 policy-drools-pdp | + set -a 07:47:35 policy-drools-pdp | + POLICY_HOME=/opt/app/policy 07:47:35 policy-drools-pdp | + ls '/opt/app/policy/etc/profile.d/*.conf' 07:47:35 policy-drools-pdp | + '[' -d /opt/app/policy/bin ] 07:47:35 policy-drools-pdp | + : 07:47:35 policy-drools-pdp | + '[' -d /usr/lib/jvm/java-17-openjdk/bin ] 07:47:35 policy-drools-pdp | + : 07:47:35 policy-drools-pdp | + '[' -d /home/policy/bin ] 07:47:35 policy-drools-pdp | + set +a 
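In the trace above, the container entrypoint builds the `CP` classpath variable by piping the `ls` of /opt/app/policy/lib through `xargs -I X printf ':%s' X`, which prefixes every jar path with `:` and concatenates the results. A minimal sketch of that idiom, using a temporary directory and hypothetical jar names rather than the real lib directory:

```shell
# Build a colon-separated classpath from every jar in a lib directory,
# mirroring the `ls ... | xargs -I X printf ':%s' X` step in the trace.
libdir=$(mktemp -d)                     # hypothetical stand-in for /opt/app/policy/lib
touch "$libdir/a.jar" "$libdir/b.jar"   # hypothetical jars

# xargs -I runs printf once per input line; each invocation emits ':<path>'
# with no trailing newline, so the captured result is ':.../a.jar:.../b.jar'.
CP=$(ls "$libdir"/*.jar | xargs -I X printf ':%s' X)
echo "$CP"
```

The leading `:` this produces is harmless to the JVM, which is why the eventual `java -cp /opt/app/policy/config:/opt/app/policy/lib:$CP` invocation contains an empty path element.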
07:47:35 policy-drools-pdp | + /opt/app/policy/bin/configure-maven 07:47:35 policy-drools-pdp | + export 'M2_HOME=/home/policy/.m2' 07:47:35 policy-drools-pdp | + mkdir -p /home/policy/.m2 07:47:35 policy-drools-pdp | + '[' -z http://nexus:8081/nexus/content/repositories/snapshots/ ] 07:47:35 policy-drools-pdp | + ln -s -f /opt/app/policy/etc/m2/settings.xml /home/policy/.m2/settings.xml 07:47:35 policy-drools-pdp | + '[' -f /opt/app/policy/config/system.properties ] 07:47:35 policy-drools-pdp | + sed -n -e 's/^[ \t]*\([^ \t#]*\)[ \t]*=[ \t]*\(.*\)$/-D\1=\2/p' /opt/app/policy/config/system.properties 07:47:35 policy-drools-pdp | + systemProperties='-Dlogback.configurationFile=config/logback.xml' 07:47:35 policy-drools-pdp | + cd /opt/app/policy 07:47:35 policy-drools-pdp | + exec /usr/lib/jvm/java-17-openjdk/bin/java -server -Xms512m -Xmx512m -cp /opt/app/policy/config:/opt/app/policy/lib::/opt/app/policy/lib/accessors-smart-2.5.0.jar:/opt/app/policy/lib/angus-activation-2.0.2.jar:/opt/app/policy/lib/ant-1.10.14.jar:/opt/app/policy/lib/ant-launcher-1.10.14.jar:/opt/app/policy/lib/antlr-runtime-3.5.2.jar:/opt/app/policy/lib/antlr4-runtime-4.13.0.jar:/opt/app/policy/lib/aopalliance-1.0.jar:/opt/app/policy/lib/aopalliance-repackaged-3.0.6.jar:/opt/app/policy/lib/asm-9.3.jar:/opt/app/policy/lib/byte-buddy-1.15.11.jar:/opt/app/policy/lib/capabilities-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/checker-qual-3.48.3.jar:/opt/app/policy/lib/classgraph-4.8.179.jar:/opt/app/policy/lib/classmate-1.5.1.jar:/opt/app/policy/lib/common-parameters-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/commons-beanutils-1.10.1.jar:/opt/app/policy/lib/commons-cli-1.9.0.jar:/opt/app/policy/lib/commons-codec-1.18.0.jar:/opt/app/policy/lib/commons-collections-3.2.2.jar:/opt/app/policy/lib/commons-collections4-4.5.0-M3.jar:/opt/app/policy/lib/commons-configuration2-2.11.0.jar:/opt/app/policy/lib/commons-digester-2.1.jar:/opt/app/policy/lib/commons-io-2.18.0.jar:/opt/app/policy/lib/commons-jexl3-3.2.1.jar:/opt
/app/policy/lib/commons-lang3-3.17.0.jar:/opt/app/policy/lib/commons-logging-1.3.5.jar:/opt/app/policy/lib/commons-net-3.11.1.jar:/opt/app/policy/lib/commons-text-1.13.0.jar:/opt/app/policy/lib/commons-validator-1.8.0.jar:/opt/app/policy/lib/core-0.12.4.jar:/opt/app/policy/lib/drools-base-8.40.1.Final.jar:/opt/app/policy/lib/drools-canonical-model-8.40.1.Final.jar:/opt/app/policy/lib/drools-codegen-common-8.40.1.Final.jar:/opt/app/policy/lib/drools-commands-8.40.1.Final.jar:/opt/app/policy/lib/drools-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-core-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-ast-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-extensions-8.40.1.Final.jar:/opt/app/policy/lib/drools-drl-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-ecj-8.40.1.Final.jar:/opt/app/policy/lib/drools-engine-8.40.1.Final.jar:/opt/app/policy/lib/drools-io-8.40.1.Final.jar:/opt/app/policy/lib/drools-kiesession-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-codegen-8.40.1.Final.jar:/opt/app/policy/lib/drools-model-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-compiler-8.40.1.Final.jar:/opt/app/policy/lib/drools-mvel-parser-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-persistence-jpa-8.40.1.Final.jar:/opt/app/policy/lib/drools-serialization-protobuf-8.40.1.Final.jar:/opt/app/policy/lib/drools-tms-8.40.1.Final.jar:/opt/app/policy/lib/drools-util-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-api-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-dynamic-8.40.1.Final.jar:/opt/app/policy/lib/drools-wiring-static-8.40.1.Final.jar:/opt/app/policy/lib/drools-xml-support-8.40.1.Final.jar:/opt/app/policy/lib/ecj-3.33.0.jar:/opt/app/policy/lib/error_prone_annotations-2.36.0.jar:/opt/app/policy/lib/failureaccess-1.0.3.jar:/opt/app/policy/lib/feature-lifecycle-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/gson-2.12.1.jar:/opt/app/policy/lib/gson-3.2.1-SN
APSHOT.jar:/opt/app/policy/lib/gson-javatime-serialisers-1.1.2.jar:/opt/app/policy/lib/guava-33.4.6-jre.jar:/opt/app/policy/lib/guice-4.2.2-no_aop.jar:/opt/app/policy/lib/handy-uri-templates-2.1.8.jar:/opt/app/policy/lib/hibernate-commons-annotations-7.0.3.Final.jar:/opt/app/policy/lib/hibernate-core-6.6.16.Final.jar:/opt/app/policy/lib/hk2-api-3.0.6.jar:/opt/app/policy/lib/hk2-locator-3.0.6.jar:/opt/app/policy/lib/hk2-utils-3.0.6.jar:/opt/app/policy/lib/httpclient-4.5.13.jar:/opt/app/policy/lib/httpcore-4.4.15.jar:/opt/app/policy/lib/icu4j-74.2.jar:/opt/app/policy/lib/istack-commons-runtime-4.1.2.jar:/opt/app/policy/lib/j2objc-annotations-3.0.0.jar:/opt/app/policy/lib/jackson-annotations-2.18.3.jar:/opt/app/policy/lib/jackson-core-2.18.3.jar:/opt/app/policy/lib/jackson-databind-2.18.3.jar:/opt/app/policy/lib/jackson-dataformat-yaml-2.18.3.jar:/opt/app/policy/lib/jackson-datatype-jsr310-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-base-2.18.3.jar:/opt/app/policy/lib/jackson-jakarta-rs-json-provider-2.18.3.jar:/opt/app/policy/lib/jackson-module-jakarta-xmlbind-annotations-2.18.3.jar:/opt/app/policy/lib/jakarta.activation-api-2.1.3.jar:/opt/app/policy/lib/jakarta.annotation-api-3.0.0.jar:/opt/app/policy/lib/jakarta.ejb-api-3.2.6.jar:/opt/app/policy/lib/jakarta.el-api-3.0.3.jar:/opt/app/policy/lib/jakarta.enterprise.cdi-api-2.0.2.jar:/opt/app/policy/lib/jakarta.inject-2.6.1.jar:/opt/app/policy/lib/jakarta.inject-api-2.0.1.jar:/opt/app/policy/lib/jakarta.interceptor-api-1.2.5.jar:/opt/app/policy/lib/jakarta.persistence-api-3.1.0.jar:/opt/app/policy/lib/jakarta.servlet-api-6.1.0.jar:/opt/app/policy/lib/jakarta.transaction-api-2.0.1.jar:/opt/app/policy/lib/jakarta.validation-api-3.1.1.jar:/opt/app/policy/lib/jakarta.ws.rs-api-4.0.0.jar:/opt/app/policy/lib/jakarta.xml.bind-api-4.0.2.jar:/opt/app/policy/lib/jandex-3.2.0.jar:/opt/app/policy/lib/javaparser-core-3.24.2.jar:/opt/app/policy/lib/javassist-3.30.2-GA.jar:/opt/app/policy/lib/javax.inject-1.jar:/opt/app/policy/l
ib/jaxb-core-4.0.5.jar:/opt/app/policy/lib/jaxb-impl-4.0.5.jar:/opt/app/policy/lib/jaxb-runtime-4.0.5.jar:/opt/app/policy/lib/jaxb-xjc-4.0.5.jar:/opt/app/policy/lib/jboss-logging-3.5.0.Final.jar:/opt/app/policy/lib/jcl-over-slf4j-2.0.17.jar:/opt/app/policy/lib/jcodings-1.0.58.jar:/opt/app/policy/lib/jersey-client-3.1.10.jar:/opt/app/policy/lib/jersey-common-3.1.10.jar:/opt/app/policy/lib/jersey-container-servlet-core-3.1.10.jar:/opt/app/policy/lib/jersey-hk2-3.1.10.jar:/opt/app/policy/lib/jersey-server-3.1.10.jar:/opt/app/policy/lib/jetty-ee10-servlet-12.0.21.jar:/opt/app/policy/lib/jetty-http-12.0.21.jar:/opt/app/policy/lib/jetty-io-12.0.21.jar:/opt/app/policy/lib/jetty-security-12.0.21.jar:/opt/app/policy/lib/jetty-server-12.0.21.jar:/opt/app/policy/lib/jetty-session-12.0.21.jar:/opt/app/policy/lib/jetty-util-12.0.21.jar:/opt/app/policy/lib/joda-time-2.10.2.jar:/opt/app/policy/lib/joni-2.2.1.jar:/opt/app/policy/lib/json-path-2.9.0.jar:/opt/app/policy/lib/json-smart-2.5.0.jar:/opt/app/policy/lib/jsoup-1.17.2.jar:/opt/app/policy/lib/jspecify-1.0.0.jar:/opt/app/policy/lib/kafka-clients-3.9.1.jar:/opt/app/policy/lib/kie-api-8.40.1.Final.jar:/opt/app/policy/lib/kie-ci-8.40.1.Final.jar:/opt/app/policy/lib/kie-internal-8.40.1.Final.jar:/opt/app/policy/lib/kie-memory-compiler-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-integration-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-maven-support-8.40.1.Final.jar:/opt/app/policy/lib/kie-util-xml-8.40.1.Final.jar:/opt/app/policy/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/app/policy/lib/logback-classic-1.5.18.jar:/opt/app/policy/lib/logback-core-1.5.18.jar:/opt/app/policy/lib/lombok-1.18.38.jar:/opt/app/policy/lib/lz4-java-1.8.0.jar:/opt/app/policy/lib/maven-artifact-3.8.6.jar:/opt/app/policy/lib/maven-builder-support-3.8.6.jar:/opt/app/policy/lib/maven-compat-3.8.6.jar:/opt/app/policy/lib/maven-core-3.8.6.jar:/opt/app/policy/lib/maven-model-3.8.6.jar:/opt/app/policy/lib/maven-model-builder-3.8
.6.jar:/opt/app/policy/lib/maven-plugin-api-3.8.6.jar:/opt/app/policy/lib/maven-repository-metadata-3.8.6.jar:/opt/app/policy/lib/maven-resolver-api-1.6.3.jar:/opt/app/policy/lib/maven-resolver-connector-basic-1.7.3.jar:/opt/app/policy/lib/maven-resolver-impl-1.6.3.jar:/opt/app/policy/lib/maven-resolver-provider-3.8.6.jar:/opt/app/policy/lib/maven-resolver-spi-1.6.3.jar:/opt/app/policy/lib/maven-resolver-transport-file-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-http-1.7.3.jar:/opt/app/policy/lib/maven-resolver-transport-wagon-1.7.3.jar:/opt/app/policy/lib/maven-resolver-util-1.6.3.jar:/opt/app/policy/lib/maven-settings-3.8.6.jar:/opt/app/policy/lib/maven-settings-builder-3.8.6.jar:/opt/app/policy/lib/maven-shared-utils-3.3.4.jar:/opt/app/policy/lib/message-bus-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/mvel2-2.5.2.Final.jar:/opt/app/policy/lib/mxparser-1.2.2.jar:/opt/app/policy/lib/opentelemetry-api-1.43.0.jar:/opt/app/policy/lib/opentelemetry-api-incubator-1.41.0-alpha.jar:/opt/app/policy/lib/opentelemetry-context-1.43.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-2.7.0.jar:/opt/app/policy/lib/opentelemetry-instrumentation-api-incubator-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-2.6-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-kafka-clients-common-2.7.0-alpha.jar:/opt/app/policy/lib/opentelemetry-semconv-1.25.0-alpha.jar:/opt/app/policy/lib/org.eclipse.sisu.inject-0.3.5.jar:/opt/app/policy/lib/org.eclipse.sisu.plexus-0.3.5.jar:/opt/app/policy/lib/osgi-resource-locator-1.0.3.jar:/opt/app/policy/lib/plexus-cipher-2.0.jar:/opt/app/policy/lib/plexus-classworlds-2.6.0.jar:/opt/app/policy/lib/plexus-component-annotations-2.1.0.jar:/opt/app/policy/lib/plexus-interpolation-1.26.jar:/opt/app/policy/lib/plexus-sec-dispatcher-2.0.jar:/opt/app/policy/lib/plexus-utils-3.6.0.jar:/opt/app/policy/lib/policy-core-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-domains-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-endpoints-3.2.1-SNAPSHO
T.jar:/opt/app/policy/lib/policy-management-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-base-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-dao-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-errors-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-examples-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-pdp-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-models-tosca-4.2.1-SNAPSHOT.jar:/opt/app/policy/lib/policy-utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/postgresql-42.7.5.jar:/opt/app/policy/lib/prometheus-metrics-config-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-core-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exporter-servlet-jakarta-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-formats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-exposition-textformats-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-instrumentation-jvm-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-model-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-common-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-initializer-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-1.3.6.jar:/opt/app/policy/lib/prometheus-metrics-tracer-otel-agent-1.3.6.jar:/opt/app/policy/lib/protobuf-java-3.22.0.jar:/opt/app/policy/lib/re2j-1.8.jar:/opt/app/policy/lib/slf4j-api-2.0.17.jar:/opt/app/policy/lib/snakeyaml-2.4.jar:/opt/app/policy/lib/snappy-java-1.1.10.5.jar:/opt/app/policy/lib/swagger-annotations-2.2.29.jar:/opt/app/policy/lib/swagger-annotations-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-core-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-integration-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-jaxrs2-servlet-initializer-v2-jakarta-2.2.29.jar:/opt/app/policy/lib/swagger-models-jakarta-2.2.29.jar:/opt/app/policy/lib/txw2-4.0.5.jar:/opt/app/policy/lib/utils-3.2.1-SNAPSHOT.jar:/opt/app/policy/lib/wagon-http
-3.5.1.jar:/opt/app/policy/lib/wagon-http-shared-3.5.1.jar:/opt/app/policy/lib/wagon-provider-api-3.5.1.jar:/opt/app/policy/lib/xmlpull-1.1.3.1.jar:/opt/app/policy/lib/xstream-1.4.20.jar:/opt/app/policy/lib/zstd-jni-1.5.6-4.jar '-Dlogback.configurationFile=config/logback.xml' org.onap.policy.drools.system.Main 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.391+00:00|INFO|LifecycleFsm|main] The mandatory Policy Types are []. Compliance is true 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.393+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: 07:47:35 policy-drools-pdp | [org.onap.policy.drools.lifecycle.LifecycleFeature@2235eaab] 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.400+00:00|INFO|PolicyContainer|main] PolicyContainer.main: configDir=config 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.401+00:00|INFO|OrderedServiceImpl|main] ***** OrderedServiceImpl implementers: 07:47:35 policy-drools-pdp | [] 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.407+00:00|INFO|IndexedKafkaTopicSourceFactory|main] IndexedKafkaTopicSourceFactory []: no topic for KAFKA Source 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.409+00:00|INFO|IndexedKafkaTopicSinkFactory|main] IndexedKafkaTopicSinkFactory []: no topic for KAFKA Sink 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.685+00:00|INFO|PolicyEngineManager|main] lock manager is org.onap.policy.drools.system.internal.SimpleLockManager@376a312c 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.693+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, 
swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: WAITED-START 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.707+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=null, 
servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.708+00:00|INFO|JettyServletServer|CONFIG-9696] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@7383eae2{STOPPED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.715+00:00|INFO|Server|CONFIG-9696] jetty-12.0.21; built: 2025-05-09T00:32:00.688Z; 
git: 1c4719601e31b05b7d68910d2edd980259f1f53c; jvm 17.0.15+6-alpine-r0 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.738+00:00|INFO|DefaultSessionIdManager|CONFIG-9696] Session workerName=node0 07:47:35 policy-drools-pdp | [2025-06-16T07:46:20.747+00:00|INFO|ContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.DefaultApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.InputsApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.PropertiesApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwitchesApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LifecycleApi cannot be instantiated and will be ignored. 
07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.FeaturesApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ControllersApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.ToolsApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.EnvironmentApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.LegacyApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.TopicsApi cannot be instantiated and will be ignored. 
07:47:35 policy-drools-pdp | Jun 16, 2025 7:46:21 AM org.glassfish.jersey.server.ResourceModelConfigurator bindProvidersAndResources 07:47:35 policy-drools-pdp | WARNING: Component of class interface org.onap.policy.drools.server.restful.SwaggerApi cannot be instantiated and will be ignored. 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.491+00:00|INFO|GsonMessageBodyHandler|CONFIG-9696] Using GSON for REST calls 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.492+00:00|INFO|JacksonHandler|CONFIG-9696] Using GSON with Jackson behaviors for REST calls 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.494+00:00|INFO|YamlMessageBodyHandler|CONFIG-9696] Accepting YAML for REST calls 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.649+00:00|INFO|ServletContextHandler|CONFIG-9696] Started oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}} 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.657+00:00|INFO|AbstractConnector|CONFIG-9696] Started CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696} 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.658+00:00|INFO|Server|CONFIG-9696] Started oejs.Server@3276732{STARTING}[12.0.21,sto=0] @2232ms 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.658+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}}, swaggerId=swagger-9696, toString()=JettyServletServer(name=CONFIG, host=0.0.0.0, port=9696, sniHostCheck=false, user=demo@people.osaaf.org, password=demo123456!, contextPath=/, jettyServer=oejs.Server@3276732{STARTED}[12.0.21,sto=0], 
context=oeje10s.ServletContextHandler@5be067de{ROOT,/,b=null,a=AVAILABLE,h=oeje10s.SessionHandler@7383eae2{STARTED}}, connector=CONFIG@18245eb0{HTTP/1.1, (http/1.1)}{0.0.0.0:9696}, jettyThread=Thread[CONFIG-9696,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-35d08e6c==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@2f5a899e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STARTED}, /*=org.glassfish.jersey.servlet.ServletContainer-626c44e7==org.glassfish.jersey.servlet.ServletContainer@7922d1a9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:,STARTED}})]: pending time is 9050 ms. 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.668+00:00|INFO|LifecycleFsm|main] lifecycle event: start engine 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.797+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 07:47:35 policy-drools-pdp | allow.auto.create.topics = true 07:47:35 policy-drools-pdp | auto.commit.interval.ms = 5000 07:47:35 policy-drools-pdp | auto.include.jmx.reporter = true 07:47:35 policy-drools-pdp | auto.offset.reset = latest 07:47:35 policy-drools-pdp | bootstrap.servers = [kafka:9092] 07:47:35 policy-drools-pdp | check.crcs = true 07:47:35 policy-drools-pdp | client.dns.lookup = use_all_dns_ips 07:47:35 policy-drools-pdp | client.id = consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-1 07:47:35 policy-drools-pdp | client.rack = 07:47:35 policy-drools-pdp | connections.max.idle.ms = 540000 07:47:35 policy-drools-pdp | default.api.timeout.ms = 60000 07:47:35 policy-drools-pdp | enable.auto.commit = true 07:47:35 policy-drools-pdp | enable.metrics.push = true 07:47:35 policy-drools-pdp | exclude.internal.topics = true 07:47:35 policy-drools-pdp | fetch.max.bytes = 52428800 07:47:35 policy-drools-pdp | fetch.max.wait.ms = 500 07:47:35 policy-drools-pdp | fetch.min.bytes = 1 07:47:35 policy-drools-pdp | group.id = 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a 07:47:35 policy-drools-pdp | 
group.instance.id = null 07:47:35 policy-drools-pdp | group.protocol = classic 07:47:35 policy-drools-pdp | group.remote.assignor = null 07:47:35 policy-drools-pdp | heartbeat.interval.ms = 3000 07:47:35 policy-drools-pdp | interceptor.classes = [] 07:47:35 policy-drools-pdp | internal.leave.group.on.close = true 07:47:35 policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 07:47:35 policy-drools-pdp | isolation.level = read_uncommitted 07:47:35 policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 07:47:35 policy-drools-pdp | max.partition.fetch.bytes = 1048576 07:47:35 policy-drools-pdp | max.poll.interval.ms = 300000 07:47:35 policy-drools-pdp | max.poll.records = 500 07:47:35 policy-drools-pdp | metadata.max.age.ms = 300000 07:47:35 policy-drools-pdp | metadata.recovery.strategy = none 07:47:35 policy-drools-pdp | metric.reporters = [] 07:47:35 policy-drools-pdp | metrics.num.samples = 2 07:47:35 policy-drools-pdp | metrics.recording.level = INFO 07:47:35 policy-drools-pdp | metrics.sample.window.ms = 30000 07:47:35 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 07:47:35 policy-drools-pdp | receive.buffer.bytes = 65536 07:47:35 policy-drools-pdp | reconnect.backoff.max.ms = 1000 07:47:35 policy-drools-pdp | reconnect.backoff.ms = 50 07:47:35 policy-drools-pdp | request.timeout.ms = 30000 07:47:35 policy-drools-pdp | retry.backoff.max.ms = 1000 07:47:35 policy-drools-pdp | retry.backoff.ms = 100 07:47:35 policy-drools-pdp | sasl.client.callback.handler.class = null 07:47:35 policy-drools-pdp | sasl.jaas.config = null 07:47:35 policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 07:47:35 policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000 07:47:35 policy-drools-pdp | sasl.kerberos.service.name = null 07:47:35 policy-drools-pdp | 
sasl.kerberos.ticket.renew.jitter = 0.05 07:47:35 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 07:47:35 policy-drools-pdp | sasl.login.callback.handler.class = null 07:47:35 policy-drools-pdp | sasl.login.class = null 07:47:35 policy-drools-pdp | sasl.login.connect.timeout.ms = null 07:47:35 policy-drools-pdp | sasl.login.read.timeout.ms = null 07:47:35 policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300 07:47:35 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60 07:47:35 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8 07:47:35 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05 07:47:35 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000 07:47:35 policy-drools-pdp | sasl.login.retry.backoff.ms = 100 07:47:35 policy-drools-pdp | sasl.mechanism = GSSAPI 07:47:35 policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30 07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.audience = null 07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.issuer = null 07:47:35 policy-drools-pdp | sasl.oauthbearer.header.urlencode = false 07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null 07:47:35 policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope 07:47:35 policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub 07:47:35 policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null 07:47:35 policy-drools-pdp | security.protocol = PLAINTEXT 07:47:35 policy-drools-pdp | security.providers = null 07:47:35 policy-drools-pdp | send.buffer.bytes = 131072 07:47:35 policy-drools-pdp | session.timeout.ms = 45000 07:47:35 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000 07:47:35 policy-drools-pdp | 
socket.connection.setup.timeout.ms = 10000 07:47:35 policy-drools-pdp | ssl.cipher.suites = null 07:47:35 policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 07:47:35 policy-drools-pdp | ssl.endpoint.identification.algorithm = https 07:47:35 policy-drools-pdp | ssl.engine.factory.class = null 07:47:35 policy-drools-pdp | ssl.key.password = null 07:47:35 policy-drools-pdp | ssl.keymanager.algorithm = SunX509 07:47:35 policy-drools-pdp | ssl.keystore.certificate.chain = null 07:47:35 policy-drools-pdp | ssl.keystore.key = null 07:47:35 policy-drools-pdp | ssl.keystore.location = null 07:47:35 policy-drools-pdp | ssl.keystore.password = null 07:47:35 policy-drools-pdp | ssl.keystore.type = JKS 07:47:35 policy-drools-pdp | ssl.protocol = TLSv1.3 07:47:35 policy-drools-pdp | ssl.provider = null 07:47:35 policy-drools-pdp | ssl.secure.random.implementation = null 07:47:35 policy-drools-pdp | ssl.trustmanager.algorithm = PKIX 07:47:35 policy-drools-pdp | ssl.truststore.certificates = null 07:47:35 policy-drools-pdp | ssl.truststore.location = null 07:47:35 policy-drools-pdp | ssl.truststore.password = null 07:47:35 policy-drools-pdp | ssl.truststore.type = JKS 07:47:35 policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 07:47:35 policy-drools-pdp | 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.833+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.900+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.900+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.900+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059981899 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.902+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-1, 
groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Subscribed to topic(s): policy-pdp-pap 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.902+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e6308a9 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.916+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.917+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 07:47:35 policy-drools-pdp | allow.auto.create.topics = true 07:47:35 policy-drools-pdp | auto.commit.interval.ms = 5000 07:47:35 policy-drools-pdp | auto.include.jmx.reporter = true 07:47:35 policy-drools-pdp | auto.offset.reset = latest 07:47:35 policy-drools-pdp | bootstrap.servers = [kafka:9092] 07:47:35 policy-drools-pdp | 
check.crcs = true 07:47:35 policy-drools-pdp | client.dns.lookup = use_all_dns_ips 07:47:35 policy-drools-pdp | client.id = consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2 07:47:35 policy-drools-pdp | client.rack = 07:47:35 policy-drools-pdp | connections.max.idle.ms = 540000 07:47:35 policy-drools-pdp | default.api.timeout.ms = 60000 07:47:35 policy-drools-pdp | enable.auto.commit = true 07:47:35 policy-drools-pdp | enable.metrics.push = true 07:47:35 policy-drools-pdp | exclude.internal.topics = true 07:47:35 policy-drools-pdp | fetch.max.bytes = 52428800 07:47:35 policy-drools-pdp | fetch.max.wait.ms = 500 07:47:35 policy-drools-pdp | fetch.min.bytes = 1 07:47:35 policy-drools-pdp | group.id = 6f5d72fa-7951-492f-89a0-8ea9d3e34f6a 07:47:35 policy-drools-pdp | group.instance.id = null 07:47:35 policy-drools-pdp | group.protocol = classic 07:47:35 policy-drools-pdp | group.remote.assignor = null 07:47:35 policy-drools-pdp | heartbeat.interval.ms = 3000 07:47:35 policy-drools-pdp | interceptor.classes = [] 07:47:35 policy-drools-pdp | internal.leave.group.on.close = true 07:47:35 policy-drools-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 07:47:35 policy-drools-pdp | isolation.level = read_uncommitted 07:47:35 policy-drools-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 07:47:35 policy-drools-pdp | max.partition.fetch.bytes = 1048576 07:47:35 policy-drools-pdp | max.poll.interval.ms = 300000 07:47:35 policy-drools-pdp | max.poll.records = 500 07:47:35 policy-drools-pdp | metadata.max.age.ms = 300000 07:47:35 policy-drools-pdp | metadata.recovery.strategy = none 07:47:35 policy-drools-pdp | metric.reporters = [] 07:47:35 policy-drools-pdp | metrics.num.samples = 2 07:47:35 policy-drools-pdp | metrics.recording.level = INFO 07:47:35 policy-drools-pdp | metrics.sample.window.ms = 30000 07:47:35 policy-drools-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
07:47:35 policy-drools-pdp | receive.buffer.bytes = 65536
07:47:35 policy-drools-pdp | reconnect.backoff.max.ms = 1000
07:47:35 policy-drools-pdp | reconnect.backoff.ms = 50
07:47:35 policy-drools-pdp | request.timeout.ms = 30000
07:47:35 policy-drools-pdp | retry.backoff.max.ms = 1000
07:47:35 policy-drools-pdp | retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.client.callback.handler.class = null
07:47:35 policy-drools-pdp | sasl.jaas.config = null
07:47:35 policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-drools-pdp | sasl.kerberos.service.name = null
07:47:35 policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-drools-pdp | sasl.login.callback.handler.class = null
07:47:35 policy-drools-pdp | sasl.login.class = null
07:47:35 policy-drools-pdp | sasl.login.connect.timeout.ms = null
07:47:35 policy-drools-pdp | sasl.login.read.timeout.ms = null
07:47:35 policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-drools-pdp | sasl.login.retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.mechanism = GSSAPI
07:47:35 policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.audience = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-drools-pdp | security.protocol = PLAINTEXT
07:47:35 policy-drools-pdp | security.providers = null
07:47:35 policy-drools-pdp | send.buffer.bytes = 131072
07:47:35 policy-drools-pdp | session.timeout.ms = 45000
07:47:35 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-drools-pdp | ssl.cipher.suites = null
07:47:35 policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-drools-pdp | ssl.endpoint.identification.algorithm = https
07:47:35 policy-drools-pdp | ssl.engine.factory.class = null
07:47:35 policy-drools-pdp | ssl.key.password = null
07:47:35 policy-drools-pdp | ssl.keymanager.algorithm = SunX509
07:47:35 policy-drools-pdp | ssl.keystore.certificate.chain = null
07:47:35 policy-drools-pdp | ssl.keystore.key = null
07:47:35 policy-drools-pdp | ssl.keystore.location = null
07:47:35 policy-drools-pdp | ssl.keystore.password = null
07:47:35 policy-drools-pdp | ssl.keystore.type = JKS
07:47:35 policy-drools-pdp | ssl.protocol = TLSv1.3
07:47:35 policy-drools-pdp | ssl.provider = null
07:47:35 policy-drools-pdp | ssl.secure.random.implementation = null
07:47:35 policy-drools-pdp | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-drools-pdp | ssl.truststore.certificates = null
07:47:35 policy-drools-pdp | ssl.truststore.location = null
07:47:35 policy-drools-pdp | ssl.truststore.password = null
07:47:35 policy-drools-pdp | ssl.truststore.type = JKS
07:47:35 policy-drools-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-drools-pdp |
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.918+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.927+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.928+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.928+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059981927
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.928+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Subscribed to topic(s): policy-pdp-pap
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.929+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a, consumerInstance=policy-drools-pdp, fetchTimeout=15000, fetchLimit=100, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.932+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4482fc5f-d631-4648-b3a1-3ff59bf92838, alive=false, publisher=null]]: starting
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.943+00:00|INFO|ProducerConfig|main] ProducerConfig values:
07:47:35 policy-drools-pdp | acks = -1
07:47:35 policy-drools-pdp | auto.include.jmx.reporter = true
07:47:35 policy-drools-pdp | batch.size = 16384
07:47:35 policy-drools-pdp | bootstrap.servers = [kafka:9092]
07:47:35 policy-drools-pdp | buffer.memory = 33554432
07:47:35 policy-drools-pdp | client.dns.lookup = use_all_dns_ips
07:47:35 policy-drools-pdp | client.id = producer-1
07:47:35 policy-drools-pdp | compression.gzip.level = -1
07:47:35 policy-drools-pdp | compression.lz4.level = 9
07:47:35 policy-drools-pdp | compression.type = none
07:47:35 policy-drools-pdp | compression.zstd.level = 3
07:47:35 policy-drools-pdp | connections.max.idle.ms = 540000
07:47:35 policy-drools-pdp | delivery.timeout.ms = 120000
07:47:35 policy-drools-pdp | enable.idempotence = true
07:47:35 policy-drools-pdp | enable.metrics.push = true
07:47:35 policy-drools-pdp | interceptor.classes = []
07:47:35 policy-drools-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-drools-pdp | linger.ms = 0
07:47:35 policy-drools-pdp | max.block.ms = 60000
07:47:35 policy-drools-pdp | max.in.flight.requests.per.connection = 5
07:47:35 policy-drools-pdp | max.request.size = 1048576
07:47:35 policy-drools-pdp | metadata.max.age.ms = 300000
07:47:35 policy-drools-pdp | metadata.max.idle.ms = 300000
07:47:35 policy-drools-pdp | metadata.recovery.strategy = none
07:47:35 policy-drools-pdp | metric.reporters = []
07:47:35 policy-drools-pdp | metrics.num.samples = 2
07:47:35 policy-drools-pdp | metrics.recording.level = INFO
07:47:35 policy-drools-pdp | metrics.sample.window.ms = 30000
07:47:35 policy-drools-pdp | partitioner.adaptive.partitioning.enable = true
07:47:35 policy-drools-pdp | partitioner.availability.timeout.ms = 0
07:47:35 policy-drools-pdp | partitioner.class = null
07:47:35 policy-drools-pdp | partitioner.ignore.keys = false
07:47:35 policy-drools-pdp | receive.buffer.bytes = 32768
07:47:35 policy-drools-pdp | reconnect.backoff.max.ms = 1000
07:47:35 policy-drools-pdp | reconnect.backoff.ms = 50
07:47:35 policy-drools-pdp | request.timeout.ms = 30000
07:47:35 policy-drools-pdp | retries = 2147483647
07:47:35 policy-drools-pdp | retry.backoff.max.ms = 1000
07:47:35 policy-drools-pdp | retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.client.callback.handler.class = null
07:47:35 policy-drools-pdp | sasl.jaas.config = null
07:47:35 policy-drools-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-drools-pdp | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-drools-pdp | sasl.kerberos.service.name = null
07:47:35 policy-drools-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-drools-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-drools-pdp | sasl.login.callback.handler.class = null
07:47:35 policy-drools-pdp | sasl.login.class = null
07:47:35 policy-drools-pdp | sasl.login.connect.timeout.ms = null
07:47:35 policy-drools-pdp | sasl.login.read.timeout.ms = null
07:47:35 policy-drools-pdp | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-drools-pdp | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-drools-pdp | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-drools-pdp | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-drools-pdp | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-drools-pdp | sasl.login.retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.mechanism = GSSAPI
07:47:35 policy-drools-pdp | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.audience = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-drools-pdp | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-drools-pdp | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-drools-pdp | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-drools-pdp | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-drools-pdp | security.protocol = PLAINTEXT
07:47:35 policy-drools-pdp | security.providers = null
07:47:35 policy-drools-pdp | send.buffer.bytes = 131072
07:47:35 policy-drools-pdp | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-drools-pdp | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-drools-pdp | ssl.cipher.suites = null
07:47:35 policy-drools-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-drools-pdp | ssl.endpoint.identification.algorithm = https
07:47:35 policy-drools-pdp | ssl.engine.factory.class = null
07:47:35 policy-drools-pdp | ssl.key.password = null
07:47:35 policy-drools-pdp | ssl.keymanager.algorithm = SunX509
07:47:35 policy-drools-pdp | ssl.keystore.certificate.chain = null
07:47:35 policy-drools-pdp | ssl.keystore.key = null
07:47:35 policy-drools-pdp | ssl.keystore.location = null
07:47:35 policy-drools-pdp | ssl.keystore.password = null
07:47:35 policy-drools-pdp | ssl.keystore.type = JKS
07:47:35 policy-drools-pdp | ssl.protocol = TLSv1.3
07:47:35 policy-drools-pdp | ssl.provider = null
07:47:35 policy-drools-pdp | ssl.secure.random.implementation = null
07:47:35 policy-drools-pdp | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-drools-pdp | ssl.truststore.certificates = null
07:47:35 policy-drools-pdp | ssl.truststore.location = null
07:47:35 policy-drools-pdp | ssl.truststore.password = null
07:47:35 policy-drools-pdp | ssl.truststore.type = JKS
07:47:35 policy-drools-pdp | transaction.timeout.ms = 60000
07:47:35 policy-drools-pdp | transactional.id = null
07:47:35 policy-drools-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-drools-pdp |
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.944+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.952+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.969+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.969+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.969+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059981969
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.970+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4482fc5f-d631-4648-b3a1-3ff59bf92838, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.971+00:00|INFO|LifecycleStateDefault|main] LifecycleStateTerminated(): state-change from TERMINATED to PASSIVE
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.972+00:00|INFO|LifecycleFsm|pool-2-thread-1] lifecycle event: status
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.972+00:00|INFO|MdcTransactionImpl|main]
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.975+00:00|INFO|Main|main] Started policy-drools-pdp service successfully.
07:47:35 policy-drools-pdp | [2025-06-16T07:46:21.988+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers:
07:47:35 policy-drools-pdp | []
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.276+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.276+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.277+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.277+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.284+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] (Re-)joining group
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.298+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Request joining group due to: need to re-join with the given member-id: consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b
07:47:35 policy-drools-pdp | [2025-06-16T07:46:22.298+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] (Re-)joining group
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.308+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b', protocol='range'}
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.322+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Finished assignment for group at generation 1: {consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b=Assignment(partitions=[policy-pdp-pap-0])}
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2-527d31e1-6c5f-412a-9475-639096ef998b', protocol='range'}
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.332+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.334+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Adding newly assigned partitions: policy-pdp-pap-0
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.342+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Found no committed offset for partition policy-pdp-pap-0
07:47:35 policy-drools-pdp | [2025-06-16T07:46:25.356+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f5d72fa-7951-492f-89a0-8ea9d3e34f6a-2, groupId=6f5d72fa-7951-492f-89a0-8ea9d3e34f6a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
07:47:35 policy-pap | Waiting for api port 6969...
07:47:35 policy-pap | api (172.17.0.7:6969) open
07:47:35 policy-pap | Waiting for kafka port 9092...
07:47:35 policy-pap | kafka (172.17.0.8:9092) open
07:47:35 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
07:47:35 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
07:47:35 policy-pap |
07:47:35 policy-pap | . ____ _ __ _ _
07:47:35 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
07:47:35 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
07:47:35 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
07:47:35 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
07:47:35 policy-pap | =========|_|==============|___/=/_/_/_/
07:47:35 policy-pap |
07:47:35 policy-pap | :: Spring Boot :: (v3.4.6)
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:10.118+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 57 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
07:47:35 policy-pap | [2025-06-16T07:46:10.120+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
07:47:35 policy-pap | [2025-06-16T07:46:11.492+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
07:47:35 policy-pap | [2025-06-16T07:46:11.581+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 77 ms. Found 7 JPA repository interfaces.
07:47:35 policy-pap | [2025-06-16T07:46:12.567+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
07:47:35 policy-pap | [2025-06-16T07:46:12.580+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
07:47:35 policy-pap | [2025-06-16T07:46:12.582+00:00|INFO|StandardService|main] Starting service [Tomcat]
07:47:35 policy-pap | [2025-06-16T07:46:12.582+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
07:47:35 policy-pap | [2025-06-16T07:46:12.635+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
07:47:35 policy-pap | [2025-06-16T07:46:12.635+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2459 ms
07:47:35 policy-pap | [2025-06-16T07:46:13.039+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
07:47:35 policy-pap | [2025-06-16T07:46:13.116+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
07:47:35 policy-pap | [2025-06-16T07:46:13.158+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
07:47:35 policy-pap | [2025-06-16T07:46:13.590+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
07:47:35 policy-pap | [2025-06-16T07:46:13.639+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
07:47:35 policy-pap | [2025-06-16T07:46:13.856+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd
07:47:35 policy-pap | [2025-06-16T07:46:13.857+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
07:47:35 policy-pap | [2025-06-16T07:46:13.956+00:00|INFO|pooling|main] HHH10001005: Database info:
07:47:35 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
07:47:35 policy-pap | Database driver: undefined/unknown
07:47:35 policy-pap | Database version: 16.4
07:47:35 policy-pap | Autocommit mode: undefined/unknown
07:47:35 policy-pap | Isolation level: undefined/unknown
07:47:35 policy-pap | Minimum pool size: undefined/unknown
07:47:35 policy-pap | Maximum pool size: undefined/unknown
07:47:35 policy-pap | [2025-06-16T07:46:15.867+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
07:47:35 policy-pap | [2025-06-16T07:46:15.870+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
07:47:35 policy-pap | [2025-06-16T07:46:17.039+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
07:47:35 policy-pap | allow.auto.create.topics = true
07:47:35 policy-pap | auto.commit.interval.ms = 5000
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | auto.offset.reset = latest
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | check.crcs = true
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-1
07:47:35 policy-pap | client.rack =
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | default.api.timeout.ms = 60000
07:47:35 policy-pap | enable.auto.commit = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | exclude.internal.topics = true
07:47:35 policy-pap | fetch.max.bytes = 52428800
07:47:35 policy-pap | fetch.max.wait.ms = 500
07:47:35 policy-pap | fetch.min.bytes = 1
07:47:35 policy-pap | group.id = 3eb43c00-b034-4edb-9227-bcdf22a1f069
07:47:35 policy-pap | group.instance.id = null
07:47:35 policy-pap | group.protocol = classic
07:47:35 policy-pap | group.remote.assignor = null
07:47:35 policy-pap | heartbeat.interval.ms = 3000
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | internal.leave.group.on.close = true
07:47:35 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
07:47:35 policy-pap | isolation.level = read_uncommitted
07:47:35 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap | max.partition.fetch.bytes = 1048576
07:47:35 policy-pap | max.poll.interval.ms = 300000
07:47:35 policy-pap | max.poll.records = 500
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
07:47:35 policy-pap | receive.buffer.bytes = 65536
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | session.timeout.ms = 45000
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:17.093+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:17.230+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:17.230+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:17.230+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059977228
07:47:35 policy-pap | [2025-06-16T07:46:17.232+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-1, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Subscribed to topic(s): policy-pdp-pap
07:47:35 policy-pap | [2025-06-16T07:46:17.233+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
07:47:35 policy-pap | allow.auto.create.topics = true
07:47:35 policy-pap | auto.commit.interval.ms = 5000
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | auto.offset.reset = latest
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | check.crcs = true
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = consumer-policy-pap-2
07:47:35 policy-pap | client.rack =
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | default.api.timeout.ms = 60000
07:47:35 policy-pap | enable.auto.commit = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | exclude.internal.topics = true
07:47:35 policy-pap | fetch.max.bytes = 52428800
07:47:35 policy-pap | fetch.max.wait.ms = 500
07:47:35 policy-pap | fetch.min.bytes = 1
07:47:35 policy-pap | group.id = policy-pap
07:47:35 policy-pap | group.instance.id = null
07:47:35 policy-pap | group.protocol = classic
07:47:35 policy-pap | group.remote.assignor = null
07:47:35 policy-pap | heartbeat.interval.ms = 3000
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | internal.leave.group.on.close = true
07:47:35 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
07:47:35 policy-pap | isolation.level = read_uncommitted
07:47:35 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap | max.partition.fetch.bytes = 1048576
07:47:35 policy-pap | max.poll.interval.ms = 300000
07:47:35 policy-pap | max.poll.records = 500
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
07:47:35 policy-pap | receive.buffer.bytes = 65536
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | session.timeout.ms = 45000
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:17.233+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:17.249+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:17.249+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:17.249+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059977249
07:47:35 policy-pap | [2025-06-16T07:46:17.249+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
07:47:35 policy-pap | [2025-06-16T07:46:17.557+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=drools, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Drools 1.0.0, onap.policies.native.drools.Controller 1.0.0, onap.policies.native.drools.Artifact 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
07:47:35 policy-pap | [2025-06-16T07:46:17.671+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
07:47:35 policy-pap | [2025-06-16T07:46:17.741+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
07:47:35 policy-pap | [2025-06-16T07:46:17.945+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
07:47:35 policy-pap | [2025-06-16T07:46:18.652+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
07:47:35 policy-pap | [2025-06-16T07:46:18.757+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
07:47:35 policy-pap | [2025-06-16T07:46:18.776+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1'
07:47:35 policy-pap | [2025-06-16T07:46:18.797+00:00|INFO|ServiceManager|main] Policy PAP starting
07:47:35 policy-pap | [2025-06-16T07:46:18.798+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
07:47:35 policy-pap | [2025-06-16T07:46:18.798+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
07:47:35 policy-pap | [2025-06-16T07:46:18.799+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
07:47:35 policy-pap | [2025-06-16T07:46:18.799+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
07:47:35 policy-pap | [2025-06-16T07:46:18.799+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
07:47:35 policy-pap | [2025-06-16T07:46:18.800+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
07:47:35 policy-pap | [2025-06-16T07:46:18.801+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3eb43c00-b034-4edb-9227-bcdf22a1f069, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2d45db20
07:47:35 policy-pap | [2025-06-16T07:46:18.812+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3eb43c00-b034-4edb-9227-bcdf22a1f069, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.812+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
07:47:35 policy-pap | allow.auto.create.topics = true
07:47:35 policy-pap | auto.commit.interval.ms = 5000
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | auto.offset.reset = latest
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | check.crcs = true
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3
07:47:35 policy-pap | client.rack =
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | default.api.timeout.ms = 60000
07:47:35 policy-pap | enable.auto.commit = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | exclude.internal.topics = true
07:47:35 policy-pap | fetch.max.bytes = 52428800
07:47:35 policy-pap | fetch.max.wait.ms = 500
07:47:35 policy-pap | fetch.min.bytes = 1
07:47:35 policy-pap | group.id = 3eb43c00-b034-4edb-9227-bcdf22a1f069
07:47:35 policy-pap | group.instance.id = null
07:47:35 policy-pap | group.protocol = classic
07:47:35 policy-pap | group.remote.assignor = null
07:47:35 policy-pap | heartbeat.interval.ms = 3000
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | internal.leave.group.on.close = true
07:47:35 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
07:47:35 policy-pap | isolation.level = read_uncommitted
07:47:35 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap | max.partition.fetch.bytes = 1048576
07:47:35 policy-pap | max.poll.interval.ms = 300000
07:47:35 policy-pap | max.poll.records = 500
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
07:47:35 policy-pap | receive.buffer.bytes = 65536
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | session.timeout.ms = 45000
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:18.813+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:18.819+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:18.819+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:18.819+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059978819
07:47:35 policy-pap | [2025-06-16T07:46:18.820+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Subscribed to topic(s): policy-pdp-pap
07:47:35 policy-pap | [2025-06-16T07:46:18.820+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
07:47:35 policy-pap | [2025-06-16T07:46:18.820+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a5bae0e2-07ec-4fb1-ac78-615435898053, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7540aa55
07:47:35 policy-pap | [2025-06-16T07:46:18.820+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a5bae0e2-07ec-4fb1-ac78-615435898053, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.820+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
07:47:35 policy-pap | allow.auto.create.topics = true
07:47:35 policy-pap | auto.commit.interval.ms = 5000
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | auto.offset.reset = latest
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | check.crcs = true
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = consumer-policy-pap-4
07:47:35 policy-pap | client.rack =
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | default.api.timeout.ms = 60000
07:47:35 policy-pap | enable.auto.commit = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | exclude.internal.topics = true
07:47:35 policy-pap | fetch.max.bytes = 52428800
07:47:35 policy-pap | fetch.max.wait.ms = 500
07:47:35 policy-pap | fetch.min.bytes = 1
07:47:35 policy-pap | group.id = policy-pap
07:47:35 policy-pap | group.instance.id = null
07:47:35 policy-pap | group.protocol = classic
07:47:35 policy-pap | group.remote.assignor = null
07:47:35 policy-pap | heartbeat.interval.ms = 3000
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | internal.leave.group.on.close = true
07:47:35 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
07:47:35 policy-pap | isolation.level = read_uncommitted
07:47:35 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap | max.partition.fetch.bytes = 1048576
07:47:35 policy-pap | max.poll.interval.ms = 300000
07:47:35 policy-pap | max.poll.records = 500
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
07:47:35 policy-pap | receive.buffer.bytes = 65536
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | session.timeout.ms = 45000
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:18.821+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:18.826+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:18.826+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:18.826+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059978826
07:47:35 policy-pap | [2025-06-16T07:46:18.826+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
07:47:35 policy-pap | [2025-06-16T07:46:18.826+00:00|INFO|ServiceManager|main] Policy PAP starting topics
07:47:35 policy-pap | [2025-06-16T07:46:18.827+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a5bae0e2-07ec-4fb1-ac78-615435898053, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.827+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3eb43c00-b034-4edb-9227-bcdf22a1f069, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.827+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8bd8ec4b-9146-4ffe-9107-61620701a65b, alive=false, publisher=null]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.838+00:00|INFO|ProducerConfig|main] ProducerConfig values:
07:47:35 policy-pap | acks = -1
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | batch.size = 16384
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | buffer.memory = 33554432
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = producer-1
07:47:35 policy-pap | compression.gzip.level = -1
07:47:35 policy-pap | compression.lz4.level = 9
07:47:35 policy-pap | compression.type = none
07:47:35 policy-pap | compression.zstd.level = 3
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | delivery.timeout.ms = 120000
07:47:35 policy-pap | enable.idempotence = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-pap | linger.ms = 0
07:47:35 policy-pap | max.block.ms = 60000
07:47:35 policy-pap | max.in.flight.requests.per.connection = 5
07:47:35 policy-pap | max.request.size = 1048576
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.max.idle.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partitioner.adaptive.partitioning.enable = true
07:47:35 policy-pap | partitioner.availability.timeout.ms = 0
07:47:35 policy-pap | partitioner.class = null
07:47:35 policy-pap | partitioner.ignore.keys = false
07:47:35 policy-pap | receive.buffer.bytes = 32768
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retries = 2147483647
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | transaction.timeout.ms = 60000
07:47:35 policy-pap | transactional.id = null
07:47:35 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:18.839+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:18.850+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
07:47:35 policy-pap | [2025-06-16T07:46:18.875+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:18.875+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:18.875+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059978875
07:47:35 policy-pap | [2025-06-16T07:46:18.876+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8bd8ec4b-9146-4ffe-9107-61620701a65b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
07:47:35 policy-pap | [2025-06-16T07:46:18.876+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e9051d24-cebd-42d5-ab4a-87b7c65188ae, alive=false, publisher=null]]: starting
07:47:35 policy-pap | [2025-06-16T07:46:18.876+00:00|INFO|ProducerConfig|main] ProducerConfig values:
07:47:35 policy-pap | acks = -1
07:47:35 policy-pap | auto.include.jmx.reporter = true
07:47:35 policy-pap | batch.size = 16384
07:47:35 policy-pap | bootstrap.servers = [kafka:9092]
07:47:35 policy-pap | buffer.memory = 33554432
07:47:35 policy-pap | client.dns.lookup = use_all_dns_ips
07:47:35 policy-pap | client.id = producer-2
07:47:35 policy-pap | compression.gzip.level = -1
07:47:35 policy-pap | compression.lz4.level = 9
07:47:35 policy-pap | compression.type = none
07:47:35 policy-pap | compression.zstd.level = 3
07:47:35 policy-pap | connections.max.idle.ms = 540000
07:47:35 policy-pap | delivery.timeout.ms = 120000
07:47:35 policy-pap | enable.idempotence = true
07:47:35 policy-pap | enable.metrics.push = true
07:47:35 policy-pap | interceptor.classes = []
07:47:35 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-pap | linger.ms = 0
07:47:35 policy-pap | max.block.ms = 60000
07:47:35 policy-pap | max.in.flight.requests.per.connection = 5
07:47:35 policy-pap | max.request.size = 1048576
07:47:35 policy-pap | metadata.max.age.ms = 300000
07:47:35 policy-pap | metadata.max.idle.ms = 300000
07:47:35 policy-pap | metadata.recovery.strategy = none
07:47:35 policy-pap | metric.reporters = []
07:47:35 policy-pap | metrics.num.samples = 2
07:47:35 policy-pap | metrics.recording.level = INFO
07:47:35 policy-pap | metrics.sample.window.ms = 30000
07:47:35 policy-pap | partitioner.adaptive.partitioning.enable = true
07:47:35 policy-pap | partitioner.availability.timeout.ms = 0
07:47:35 policy-pap | partitioner.class = null
07:47:35 policy-pap | partitioner.ignore.keys = false
07:47:35 policy-pap | receive.buffer.bytes = 32768
07:47:35 policy-pap | reconnect.backoff.max.ms = 1000
07:47:35 policy-pap | reconnect.backoff.ms = 50
07:47:35 policy-pap | request.timeout.ms = 30000
07:47:35 policy-pap | retries = 2147483647
07:47:35 policy-pap | retry.backoff.max.ms = 1000
07:47:35 policy-pap | retry.backoff.ms = 100
07:47:35 policy-pap | sasl.client.callback.handler.class = null
07:47:35 policy-pap | sasl.jaas.config = null
07:47:35 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
07:47:35 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
07:47:35 policy-pap | sasl.kerberos.service.name = null
07:47:35 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
07:47:35 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
07:47:35 policy-pap | sasl.login.callback.handler.class = null
07:47:35 policy-pap | sasl.login.class = null
07:47:35 policy-pap | sasl.login.connect.timeout.ms = null
07:47:35 policy-pap | sasl.login.read.timeout.ms = null
07:47:35 policy-pap | sasl.login.refresh.buffer.seconds = 300
07:47:35 policy-pap | sasl.login.refresh.min.period.seconds = 60
07:47:35 policy-pap | sasl.login.refresh.window.factor = 0.8
07:47:35 policy-pap | sasl.login.refresh.window.jitter = 0.05
07:47:35 policy-pap | sasl.login.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.login.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.mechanism = GSSAPI
07:47:35 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
07:47:35 policy-pap | sasl.oauthbearer.expected.audience = null
07:47:35 policy-pap | sasl.oauthbearer.expected.issuer = null
07:47:35 policy-pap | sasl.oauthbearer.header.urlencode = false
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
07:47:35 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
07:47:35 policy-pap | sasl.oauthbearer.scope.claim.name = scope
07:47:35 policy-pap | sasl.oauthbearer.sub.claim.name = sub
07:47:35 policy-pap | sasl.oauthbearer.token.endpoint.url = null
07:47:35 policy-pap | security.protocol = PLAINTEXT
07:47:35 policy-pap | security.providers = null
07:47:35 policy-pap | send.buffer.bytes = 131072
07:47:35 policy-pap | socket.connection.setup.timeout.max.ms = 30000
07:47:35 policy-pap | socket.connection.setup.timeout.ms = 10000
07:47:35 policy-pap | ssl.cipher.suites = null
07:47:35 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
07:47:35 policy-pap | ssl.endpoint.identification.algorithm = https
07:47:35 policy-pap | ssl.engine.factory.class = null
07:47:35 policy-pap | ssl.key.password = null
07:47:35 policy-pap | ssl.keymanager.algorithm = SunX509
07:47:35 policy-pap | ssl.keystore.certificate.chain = null
07:47:35 policy-pap | ssl.keystore.key = null
07:47:35 policy-pap | ssl.keystore.location = null
07:47:35 policy-pap | ssl.keystore.password = null
07:47:35 policy-pap | ssl.keystore.type = JKS
07:47:35 policy-pap | ssl.protocol = TLSv1.3
07:47:35 policy-pap | ssl.provider = null
07:47:35 policy-pap | ssl.secure.random.implementation = null
07:47:35 policy-pap | ssl.trustmanager.algorithm = PKIX
07:47:35 policy-pap | ssl.truststore.certificates = null
07:47:35 policy-pap | ssl.truststore.location = null
07:47:35 policy-pap | ssl.truststore.password = null
07:47:35 policy-pap | ssl.truststore.type = JKS
07:47:35 policy-pap | transaction.timeout.ms = 60000
07:47:35 policy-pap | transactional.id = null
07:47:35 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
07:47:35 policy-pap |
07:47:35 policy-pap | [2025-06-16T07:46:18.876+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
07:47:35 policy-pap | [2025-06-16T07:46:18.877+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750059978881
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e9051d24-cebd-42d5-ab4a-87b7c65188ae, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
07:47:35 policy-pap | [2025-06-16T07:46:18.882+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
07:47:35 policy-pap | [2025-06-16T07:46:18.884+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
07:47:35 policy-pap | [2025-06-16T07:46:18.888+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
07:47:35 policy-pap | [2025-06-16T07:46:18.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
07:47:35 policy-pap | [2025-06-16T07:46:18.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
07:47:35 policy-pap | [2025-06-16T07:46:18.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
07:47:35 policy-pap | [2025-06-16T07:46:18.889+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
07:47:35 policy-pap | [2025-06-16T07:46:18.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
07:47:35 policy-pap | [2025-06-16T07:46:18.890+00:00|INFO|TimerManager|Thread-9] timer manager update started
07:47:35 policy-pap | [2025-06-16T07:46:18.890+00:00|INFO|ServiceManager|main] Policy PAP started
07:47:35 policy-pap | [2025-06-16T07:46:18.890+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.549 seconds (process running for 10.108)
07:47:35 policy-pap | [2025-06-16T07:46:19.333+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
07:47:35 policy-pap | [2025-06-16T07:46:19.333+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-pap | [2025-06-16T07:46:19.333+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-pap | [2025-06-16T07:46:19.339+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-pap | [2025-06-16T07:46:19.396+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
07:47:35 policy-pap | [2025-06-16T07:46:19.401+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
07:47:35 policy-pap | [2025-06-16T07:46:19.421+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:19.421+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 3qbXtuCCQ9WamUW573wmtQ
07:47:35 policy-pap | [2025-06-16T07:46:19.548+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
07:47:35 policy-pap | [2025-06-16T07:46:19.569+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:19.781+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:19.829+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:20.255+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:20.302+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
07:47:35 policy-pap | [2025-06-16T07:46:21.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered
group coordinator kafka:9092 (id: 2147483646 rack: null) 07:47:35 policy-pap | [2025-06-16T07:46:21.134+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 07:47:35 policy-pap | [2025-06-16T07:46:21.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c 07:47:35 policy-pap | [2025-06-16T07:46:21.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 07:47:35 policy-pap | [2025-06-16T07:46:21.184+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 07:47:35 policy-pap | [2025-06-16T07:46:21.187+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] (Re-)joining group 07:47:35 policy-pap | [2025-06-16T07:46:21.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Request joining group due to: need to re-join with the given member-id: consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc 07:47:35 policy-pap | [2025-06-16T07:46:21.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] (Re-)joining group 07:47:35 policy-pap | [2025-06-16T07:46:24.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] 
[Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c', protocol='range'} 07:47:35 policy-pap | [2025-06-16T07:46:24.193+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Successfully joined group with generation Generation{generationId=1, memberId='consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc', protocol='range'} 07:47:35 policy-pap | [2025-06-16T07:46:24.201+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Finished assignment for group at generation 1: {consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc=Assignment(partitions=[policy-pdp-pap-0])} 07:47:35 policy-pap | [2025-06-16T07:46:24.201+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c=Assignment(partitions=[policy-pdp-pap-0])} 07:47:35 policy-pap | [2025-06-16T07:46:24.228+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Successfully synced group in generation Generation{generationId=1, memberId='consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3-77691028-bcd7-4a3f-8b74-cc52e5aaa8bc', protocol='range'} 07:47:35 policy-pap | [2025-06-16T07:46:24.229+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-policy-pap-4-baf0e6eb-1f0d-40a9-a1d3-2d5d2f786e8c', protocol='range'} 07:47:35 policy-pap | [2025-06-16T07:46:24.229+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 07:47:35 policy-pap | [2025-06-16T07:46:24.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 07:47:35 policy-pap | [2025-06-16T07:46:24.233+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Adding newly assigned partitions: policy-pdp-pap-0 07:47:35 policy-pap | [2025-06-16T07:46:24.233+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 07:47:35 policy-pap | [2025-06-16T07:46:24.247+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Found no committed offset for partition policy-pdp-pap-0 07:47:35 policy-pap | [2025-06-16T07:46:24.248+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 07:47:35 policy-pap | [2025-06-16T07:46:24.260+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3eb43c00-b034-4edb-9227-bcdf22a1f069-3, groupId=3eb43c00-b034-4edb-9227-bcdf22a1f069] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 07:47:35 policy-pap | [2025-06-16T07:46:24.260+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 07:47:35 policy-pap | [2025-06-16T07:46:41.620+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 07:47:35 policy-pap | [2025-06-16T07:46:41.620+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 07:47:35 policy-pap | [2025-06-16T07:46:41.622+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 07:47:35 postgres | The files belonging to this database system will be owned by user "postgres". 07:47:35 postgres | This user must also own the server process. 07:47:35 postgres | 07:47:35 postgres | The database cluster will be initialized with locale "en_US.utf8". 07:47:35 postgres | The default database encoding has accordingly been set to "UTF8". 07:47:35 postgres | The default text search configuration will be set to "english". 07:47:35 postgres | 07:47:35 postgres | Data page checksums are disabled. 07:47:35 postgres | 07:47:35 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 07:47:35 postgres | creating subdirectories ... ok 07:47:35 postgres | selecting dynamic shared memory implementation ... posix 07:47:35 postgres | selecting default max_connections ... 100 07:47:35 postgres | selecting default shared_buffers ... 128MB 07:47:35 postgres | selecting default time zone ... Etc/UTC 07:47:35 postgres | creating configuration files ... ok 07:47:35 postgres | running bootstrap script ... ok 07:47:35 postgres | performing post-bootstrap initialization ... 
ok 07:47:35 postgres | initdb: warning: enabling "trust" authentication for local connections 07:47:35 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 07:47:35 postgres | syncing data to disk ... ok 07:47:35 postgres | 07:47:35 postgres | 07:47:35 postgres | Success. You can now start the database server using: 07:47:35 postgres | 07:47:35 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 07:47:35 postgres | 07:47:35 postgres | waiting for server to start....2025-06-16 07:45:42.692 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 07:47:35 postgres | 2025-06-16 07:45:42.715 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 07:47:35 postgres | 2025-06-16 07:45:42.722 UTC [51] LOG: database system was shut down at 2025-06-16 07:45:42 UTC 07:47:35 postgres | 2025-06-16 07:45:42.728 UTC [48] LOG: database system is ready to accept connections 07:47:35 postgres | done 07:47:35 postgres | server started 07:47:35 postgres | 07:47:35 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 07:47:35 postgres | 07:47:35 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 07:47:35 postgres | #!/bin/bash -xv 07:47:35 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 07:47:35 postgres | # 07:47:35 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 07:47:35 postgres | # you may not use this file except in compliance with the License. 
07:47:35 postgres | # You may obtain a copy of the License at 07:47:35 postgres | # 07:47:35 postgres | # http://www.apache.org/licenses/LICENSE-2.0 07:47:35 postgres | # 07:47:35 postgres | # Unless required by applicable law or agreed to in writing, software 07:47:35 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 07:47:35 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 07:47:35 postgres | # See the License for the specific language governing permissions and 07:47:35 postgres | # limitations under the License. 07:47:35 postgres | 07:47:35 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 07:47:35 postgres | CREATE ROLE 07:47:35 postgres | 07:47:35 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | do 07:47:35 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 07:47:35 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 07:47:35 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 07:47:35 postgres | done 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 07:47:35 postgres | GRANT 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 07:47:35 postgres | GRANT 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 07:47:35 postgres | GRANT 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 07:47:35 postgres | GRANT 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 07:47:35 postgres 
| GRANT 07:47:35 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 07:47:35 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 07:47:35 postgres | CREATE DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 07:47:35 postgres | ALTER DATABASE 07:47:35 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 07:47:35 postgres | GRANT 07:47:35 postgres | 07:47:35 postgres | waiting for server to shut down...2025-06-16 07:45:44.093 UTC [48] LOG: received fast shutdown request 07:47:35 postgres | .2025-06-16 07:45:44.094 UTC [48] LOG: aborting any active transactions 07:47:35 postgres | 2025-06-16 07:45:44.096 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 07:47:35 postgres | 2025-06-16 07:45:44.098 UTC [49] LOG: shutting down 07:47:35 postgres | 2025-06-16 07:45:44.100 UTC [49] LOG: checkpoint starting: shutdown immediate 07:47:35 postgres | 2025-06-16 07:45:44.622 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.356 s, sync=0.156 s, total=0.525 s; sync files=1788, longest=0.003 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 07:47:35 postgres | 2025-06-16 07:45:44.635 UTC [48] LOG: database system is shut down 07:47:35 postgres | done 07:47:35 postgres | server stopped 07:47:35 postgres | 07:47:35 postgres | PostgreSQL init process complete; ready for start up. 
07:47:35 postgres | 07:47:35 postgres | 2025-06-16 07:45:44.722 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 07:47:35 postgres | 2025-06-16 07:45:44.723 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 07:47:35 postgres | 2025-06-16 07:45:44.723 UTC [1] LOG: listening on IPv6 address "::", port 5432 07:47:35 postgres | 2025-06-16 07:45:44.726 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 07:47:35 postgres | 2025-06-16 07:45:44.734 UTC [101] LOG: database system was shut down at 2025-06-16 07:45:44 UTC 07:47:35 postgres | 2025-06-16 07:45:44.739 UTC [1] LOG: database system is ready to accept connections 07:47:36 prometheus | time=2025-06-16T07:45:40.638Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 07:47:36 prometheus | time=2025-06-16T07:45:40.638Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 07:47:36 prometheus | time=2025-06-16T07:45:40.638Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 07:47:36 prometheus | time=2025-06-16T07:45:40.642Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 07:47:36 prometheus | time=2025-06-16T07:45:40.645Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 07:47:36 prometheus | time=2025-06-16T07:45:40.645Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 
07:47:36 prometheus | time=2025-06-16T07:45:40.648Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 07:47:36 prometheus | time=2025-06-16T07:45:40.648Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 07:47:36 prometheus | time=2025-06-16T07:45:40.651Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 07:47:36 prometheus | time=2025-06-16T07:45:40.651Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.78µs 07:47:36 prometheus | time=2025-06-16T07:45:40.651Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 07:47:36 prometheus | time=2025-06-16T07:45:40.652Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=716.723µs 07:47:36 prometheus | time=2025-06-16T07:45:40.652Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=40.471µs wal_replay_duration=741.873µs wbl_replay_duration=310ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.78µs total_replay_duration=844.135µs 07:47:36 prometheus | time=2025-06-16T07:45:40.655Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 07:47:36 prometheus | time=2025-06-16T07:45:40.655Z level=INFO source=main.go:1290 msg="TSDB started" 07:47:36 prometheus | time=2025-06-16T07:45:40.655Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 07:47:36 prometheus | time=2025-06-16T07:45:40.656Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 07:47:36 prometheus | time=2025-06-16T07:45:40.656Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.66µs remote_storage=1.98µs web_handler=400ns query_engine=1.17µs scrape=290.585µs scrape_sd=143.182µs 
notify=132.273µs notify_sd=13.27µs rules=1.84µs tracing=5.56µs filename=/etc/prometheus/prometheus.yml totalDuration=1.180461ms 07:47:36 prometheus | time=2025-06-16T07:45:40.656Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 07:47:36 prometheus | time=2025-06-16T07:45:40.656Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 07:47:36 zookeeper | ===> User 07:47:36 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 07:47:36 zookeeper | ===> Configuring ... 07:47:36 zookeeper | ===> Running preflight checks ... 07:47:36 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 07:47:36 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 07:47:36 zookeeper | ===> Launching ... 07:47:36 zookeeper | ===> Launching zookeeper ... 07:47:36 zookeeper | [2025-06-16 07:45:46,750] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,752] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,752] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,752] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,752] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,754] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 07:47:36 zookeeper | [2025-06-16 07:45:46,754] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 07:47:36 zookeeper | [2025-06-16 07:45:46,754] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) 07:47:36 zookeeper | [2025-06-16 07:45:46,754] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 07:47:36 zookeeper | [2025-06-16 07:45:46,755] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 07:47:36 zookeeper | [2025-06-16 07:45:46,756] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,756] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,756] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,756] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,757] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 07:47:36 zookeeper | [2025-06-16 07:45:46,757] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 07:47:36 zookeeper | [2025-06-16 07:45:46,767] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 07:47:36 zookeeper | [2025-06-16 07:45:46,769] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 07:47:36 zookeeper | [2025-06-16 07:45:46,769] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 07:47:36 zookeeper | [2025-06-16 07:45:46,771] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 07:47:36 zookeeper | [2025-06-16 07:45:46,778] INFO 
(org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,778] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,778] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,778] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,779] INFO (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,780] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,780] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] 
INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/
java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/b
in/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v2024
1219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:os.version=4.15.0-192-generic 
(org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,781] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,782] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,783] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,783] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,783] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,783] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,784] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
07:47:36 zookeeper | [2025-06-16 07:45:46,785] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,785] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,786] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
07:47:36 zookeeper | [2025-06-16 07:45:46,786] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,787] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
07:47:36 zookeeper | [2025-06-16 07:45:46,789] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,789] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,790] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
07:47:36 zookeeper | [2025-06-16 07:45:46,790] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
07:47:36 zookeeper | [2025-06-16 07:45:46,790] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,809] INFO Logging initialized @448ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
07:47:36 zookeeper | [2025-06-16 07:45:46,861] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
07:47:36 zookeeper | [2025-06-16 07:45:46,861] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
07:47:36 zookeeper | [2025-06-16 07:45:46,877] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
07:47:36 zookeeper | [2025-06-16 07:45:46,917] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
07:47:36 zookeeper | [2025-06-16 07:45:46,917] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
07:47:36 zookeeper | [2025-06-16 07:45:46,919] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
07:47:36 zookeeper | [2025-06-16 07:45:46,923] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
07:47:36 zookeeper | [2025-06-16 07:45:46,932] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
07:47:36 zookeeper | [2025-06-16 07:45:46,944] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
07:47:36 zookeeper | [2025-06-16 07:45:46,945] INFO Started @588ms (org.eclipse.jetty.server.Server)
07:47:36 zookeeper | [2025-06-16 07:45:46,945] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
07:47:36 zookeeper | [2025-06-16 07:45:46,950] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,950] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,952] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,953] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,972] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,973] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
07:47:36 zookeeper | [2025-06-16 07:45:46,974] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
07:47:36 zookeeper | [2025-06-16 07:45:46,974] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
07:47:36 zookeeper | [2025-06-16 07:45:46,986] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
07:47:36 zookeeper | [2025-06-16 07:45:46,986] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
07:47:36 zookeeper | [2025-06-16 07:45:46,990] INFO Snapshot loaded in 15 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
07:47:36 zookeeper | [2025-06-16 07:45:46,991] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
07:47:36 zookeeper | [2025-06-16 07:45:46,991] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
07:47:36 zookeeper | [2025-06-16 07:45:47,004] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
07:47:36 zookeeper | [2025-06-16 07:45:47,005] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
07:47:36 zookeeper | [2025-06-16 07:45:47,021] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
07:47:36 zookeeper | [2025-06-16 07:45:47,023] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
07:47:36 zookeeper | [2025-06-16 07:45:48,116] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
07:47:36 Tearing down containers...
07:47:36 Container policy-drools-pdp Stopping
07:47:36 Container grafana Stopping
07:47:36 Container policy-csit Stopping
07:47:36 Container policy-csit Stopped
07:47:36 Container policy-csit Removing
07:47:36 Container policy-csit Removed
07:47:36 Container grafana Stopped
07:47:36 Container grafana Removing
07:47:36 Container grafana Removed
07:47:36 Container prometheus Stopping
07:47:36 Container prometheus Stopped
07:47:36 Container prometheus Removing
07:47:37 Container prometheus Removed
07:47:46 Container policy-drools-pdp Stopped
07:47:46 Container policy-drools-pdp Removing
07:47:46 Container policy-drools-pdp Removed
07:47:46 Container policy-pap Stopping
07:47:56 Container policy-pap Stopped
07:47:56 Container policy-pap Removing
07:47:56 Container policy-pap Removed
07:47:56 Container kafka Stopping
07:47:56 Container policy-api Stopping
07:47:57 Container kafka Stopped
07:47:57 Container kafka Removing
07:47:57 Container kafka Removed
07:47:57 Container zookeeper Stopping
07:47:58 Container zookeeper Stopped
07:47:58 Container zookeeper Removing
07:47:58 Container zookeeper Removed
07:48:07 Container policy-api Stopped
07:48:07 Container policy-api Removing
07:48:07 Container policy-api Removed
07:48:07 Container policy-db-migrator Stopping
07:48:07 Container policy-db-migrator Stopped
07:48:07 Container policy-db-migrator Removing
07:48:07 Container policy-db-migrator Removed
07:48:07 Container postgres Stopping
07:48:07 Container postgres Stopped
07:48:07 Container postgres Removing
07:48:07 Container postgres Removed
07:48:07 Network compose_default Removing
07:48:07 Network compose_default Removed
07:48:07 $ ssh-agent -k
07:48:07 unset SSH_AUTH_SOCK;
07:48:07 unset SSH_AGENT_PID;
07:48:07 echo Agent pid 2035 killed;
07:48:07 [ssh-agent] Stopped.
07:48:07 Robot results publisher started...
07:48:07 INFO: Checking test criticality is deprecated and will be dropped in a future release!
07:48:07 -Parsing output xml:
07:48:08 Done!
07:48:08 -Copying log files to build dir:
07:48:08 Done!
07:48:08 -Assigning results to build:
07:48:08 Done!
07:48:08 -Checking thresholds:
07:48:08 Done!
07:48:08 Done publishing Robot results.
07:48:08 [PostBuildScript] - [INFO] Executing post build scripts.
07:48:08 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins12038964929803820992.sh
07:48:08 ---> sysstat.sh
07:48:08 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins3404427908425614140.sh
07:48:08 ---> package-listing.sh
07:48:08 ++ facter osfamily
07:48:08 ++ tr '[:upper:]' '[:lower:]'
07:48:09 + OS_FAMILY=debian
07:48:09 + workspace=/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp
07:48:09 + START_PACKAGES=/tmp/packages_start.txt
07:48:09 + END_PACKAGES=/tmp/packages_end.txt
07:48:09 + DIFF_PACKAGES=/tmp/packages_diff.txt
07:48:09 + PACKAGES=/tmp/packages_start.txt
07:48:09 + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']'
07:48:09 + PACKAGES=/tmp/packages_end.txt
07:48:09 + case "${OS_FAMILY}" in
07:48:09 + dpkg -l
07:48:09 + grep '^ii'
07:48:09 + '[' -f /tmp/packages_start.txt ']'
07:48:09 + '[' -f /tmp/packages_end.txt ']'
07:48:09 + diff /tmp/packages_start.txt /tmp/packages_end.txt
07:48:09 + '[' /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp ']'
07:48:09 + mkdir -p /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/
07:48:09 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/archives/
07:48:09 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins7237569710577594811.sh
07:48:09 ---> capture-instance-metadata.sh
07:48:09 Setup pyenv:
07:48:09   system
07:48:09   3.8.13
07:48:09   3.9.13
07:48:09 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
07:48:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cXar from file:/tmp/.os_lf_venv
07:48:11 lf-activate-venv(): INFO: Installing: lftools
07:48:19 lf-activate-venv(): INFO: Adding /tmp/venv-cXar/bin to PATH
07:48:19 INFO: Running in OpenStack, capturing instance metadata
07:48:19 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins9353328464067176176.sh
07:48:19 provisioning config files...
07:48:19 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-drools-pdp-master-project-csit-drools-pdp@tmp/config10445518711285351325tmp
07:48:20 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
07:48:20 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
07:48:20 [EnvInject] - Injecting environment variables from a build step.
07:48:20 [EnvInject] - Injecting as environment variables the properties content
07:48:20 SERVER_ID=logs
07:48:20
07:48:20 [EnvInject] - Variables injected successfully.
07:48:20 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins13374907560193811729.sh
07:48:20 ---> create-netrc.sh
07:48:20 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins15317857490154401338.sh
07:48:20 ---> python-tools-install.sh
07:48:20 Setup pyenv:
07:48:20   system
07:48:20   3.8.13
07:48:20   3.9.13
07:48:20 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
07:48:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cXar from file:/tmp/.os_lf_venv
07:48:22 lf-activate-venv(): INFO: Installing: lftools
07:48:29 lf-activate-venv(): INFO: Adding /tmp/venv-cXar/bin to PATH
07:48:29 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins11776189427734271667.sh
07:48:29 ---> sudo-logs.sh
07:48:29 Archiving 'sudo' log..
07:48:29 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash /tmp/jenkins3976695874337847266.sh
07:48:29 ---> job-cost.sh
07:48:30 Setup pyenv:
07:48:30   system
07:48:30   3.8.13
07:48:30   3.9.13
07:48:30 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
07:48:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cXar from file:/tmp/.os_lf_venv
07:48:32 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
07:48:36 lf-activate-venv(): INFO: Adding /tmp/venv-cXar/bin to PATH
07:48:36 INFO: No Stack...
07:48:36 INFO: Retrieving Pricing Info for: v3-standard-8
07:48:37 INFO: Archiving Costs
07:48:37 [policy-drools-pdp-master-project-csit-drools-pdp] $ /bin/bash -l /tmp/jenkins4060465462738912975.sh
07:48:37 ---> logs-deploy.sh
07:48:37 Setup pyenv:
07:48:37   system
07:48:37   3.8.13
07:48:37   3.9.13
07:48:37 * 3.10.6 (set by /w/workspace/policy-drools-pdp-master-project-csit-drools-pdp/.python-version)
07:48:37 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-cXar from file:/tmp/.os_lf_venv
07:48:39 lf-activate-venv(): INFO: Installing: lftools
07:48:47 lf-activate-venv(): INFO: Adding /tmp/venv-cXar/bin to PATH
07:48:47 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-drools-pdp-master-project-csit-drools-pdp/2035
07:48:47 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
07:48:48 Archives upload complete.
07:48:48 INFO: archiving logs to Nexus
07:48:49 ---> uname -a:
07:48:49 Linux prd-ubuntu1804-docker-8c-8g-21536 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
07:48:49
07:48:49
07:48:49 ---> lscpu:
07:48:49 Architecture:        x86_64
07:48:49 CPU op-mode(s):      32-bit, 64-bit
07:48:49 Byte Order:          Little Endian
07:48:49 CPU(s):              8
07:48:49 On-line CPU(s) list: 0-7
07:48:49 Thread(s) per core:  1
07:48:49 Core(s) per socket:  1
07:48:49 Socket(s):           8
07:48:49 NUMA node(s):        1
07:48:49 Vendor ID:           AuthenticAMD
07:48:49 CPU family:          23
07:48:49 Model:               49
07:48:49 Model name:          AMD EPYC-Rome Processor
07:48:49 Stepping:            0
07:48:49 CPU MHz:             2800.000
07:48:49 BogoMIPS:            5600.00
07:48:49 Virtualization:      AMD-V
07:48:49 Hypervisor vendor:   KVM
07:48:49 Virtualization type: full
07:48:49 L1d cache:           32K
07:48:49 L1i cache:           32K
07:48:49 L2 cache:            512K
07:48:49 L3 cache:            16384K
07:48:49 NUMA node0 CPU(s):   0-7
07:48:49 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
07:48:49
07:48:49
07:48:49 ---> nproc:
07:48:49 8
07:48:49
07:48:49
07:48:49 ---> df -h:
07:48:49 Filesystem      Size  Used Avail Use% Mounted on
07:48:49 udev             16G     0   16G   0% /dev
07:48:49 tmpfs           3.2G  708K  3.2G   1% /run
07:48:49 /dev/vda1       155G   15G  140G  10% /
07:48:49 tmpfs            16G     0   16G   0% /dev/shm
07:48:49 tmpfs           5.0M     0  5.0M   0% /run/lock
07:48:49 tmpfs            16G     0   16G   0% /sys/fs/cgroup
07:48:49 /dev/vda15      105M  4.4M  100M   5% /boot/efi
07:48:49 tmpfs           3.2G     0  3.2G   0% /run/user/1001
07:48:49
07:48:49
07:48:49 ---> free -m:
07:48:49               total        used        free      shared  buff/cache   available
07:48:49 Mem:          32167         848       23701           0        7616       30863
07:48:49 Swap:          1023           0        1023
07:48:49
07:48:49
07:48:49 ---> ip addr:
07:48:49 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
07:48:49     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
07:48:49     inet 127.0.0.1/8 scope host lo
07:48:49        valid_lft forever preferred_lft forever
07:48:49     inet6 ::1/128 scope host
07:48:49        valid_lft forever preferred_lft forever
07:48:49 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
07:48:49     link/ether fa:16:3e:26:4a:70 brd ff:ff:ff:ff:ff:ff
07:48:49     inet 10.30.106.227/23 brd 10.30.107.255 scope global dynamic ens3
07:48:49        valid_lft 86070sec preferred_lft 86070sec
07:48:49     inet6 fe80::f816:3eff:fe26:4a70/64 scope link
07:48:49        valid_lft forever preferred_lft forever
07:48:49 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
07:48:49     link/ether 02:42:a3:6d:9a:fe brd ff:ff:ff:ff:ff:ff
07:48:49     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
07:48:49        valid_lft forever preferred_lft forever
07:48:49     inet6 fe80::42:a3ff:fe6d:9afe/64 scope link
07:48:49        valid_lft forever preferred_lft forever
07:48:49
07:48:49
07:48:49 ---> sar -b -r -n DEV:
07:48:49 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21536) 06/16/25 _x86_64_ (8 CPU)
07:48:49
07:48:49 07:43:21 LINUX RESTART (8 CPU)
07:48:49
07:48:49 07:44:02        tps      rtps      wtps   bread/s   bwrtn/s
07:48:49 07:45:01     190.10     24.14    165.97   2380.07  52088.00
07:48:49 07:46:01     696.03      4.68    691.35    431.46 239014.86
07:48:49 07:47:01     110.18      0.20    109.98     31.19  54895.78
07:48:49 07:48:01     189.35      0.25    189.10     21.06  46314.55
07:48:49 Average:     296.88      7.25    289.64    708.90  98278.51
07:48:49
07:48:49 07:44:02  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
07:48:49 07:45:01   26586936  31555788   6352284     19.28    101268   5084820   2447180      7.20   1040012   4863660   3213244
07:48:49 07:46:01   23781812  30733096   9157408     27.80    164224   6861288   6727196     19.79   2083992   6421284       132
07:48:49 07:47:01   22251272  29739020  10687948     32.45    181844   7346876   8545396     25.14   3190720   6785548     19256
07:48:49 07:48:01   23734320  31168824   9204900     27.95    206884   7261264   2923148      8.60   1823152   6700600        80
07:48:49 Average:   24088585  30799182   8850635     26.87    163555   6638562   5160730     15.18   2034469   6192773    808178
07:48:49
07:48:49 07:44:02      IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
07:48:49 07:45:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:48:49 07:45:01       ens3   1287.73    774.59  34618.78     65.13      0.00      0.00      0.00      0.00
07:48:49 07:45:01         lo     14.17     14.17      1.33      1.33      0.00      0.00      0.00      0.00
07:48:49 07:46:01 vethf48d82b      1.52      1.50      0.16      0.16      0.00      0.00      0.00      0.00
07:48:49 07:46:01 veth104dcf3      0.30      0.47      0.02      0.03      0.00      0.00      0.00      0.00
07:48:49 07:46:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
07:48:49 07:46:01 vethfacc011     58.31     79.77      9.04     10.84      0.00      0.00      0.00      0.00
07:48:49 07:47:01 vethf48d82b     14.06     11.31      1.59      1.71      0.00      0.00      0.00      0.00
07:48:49 07:47:01 veth104dcf3      3.07      3.90      0.42      0.34      0.00      0.00      0.00      0.00
07:48:49 07:47:01    docker0    101.45    131.36      5.37   1062.48      0.00      0.00      0.00      0.00
07:48:49 07:47:01 vethfacc011     92.05     93.12     18.82     15.90      0.00      0.00      0.00      0.00
07:48:49 07:48:01    docker0     43.08     55.86      3.69    295.56      0.00      0.00      0.00      0.00
07:48:49 07:48:01 vethfacc011      0.17      0.62      0.01      0.04      0.00      0.00      0.00      0.00
07:48:49 07:48:01       ens3   2065.12   1273.94  42302.65    173.66      0.00      0.00      0.00      0.00
07:48:49 07:48:01 veth665f163     91.97     91.97     16.03     18.65      0.00      0.00      0.00      0.00
07:48:49 Average:    docker0     36.28     47.00      2.27    340.93      0.00      0.00      0.00      0.00
07:48:49 Average: vethfacc011     37.79     43.56      7.00      6.72      0.00      0.00      0.00      0.00
07:48:49 Average:       ens3    434.41    269.64  10412.89     26.52      0.00      0.00      0.00      0.00
07:48:49 Average: veth665f163     23.09     23.09      4.03      4.68      0.00      0.00      0.00      0.00
07:48:49
07:48:49
07:48:49 ---> sar -P ALL:
07:48:49 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21536) 06/16/25 _x86_64_ (8 CPU)
07:48:49
07:48:49 07:43:21 LINUX RESTART (8 CPU)
07:48:49
07:48:49 07:44:02     CPU     %user     %nice   %system   %iowait    %steal     %idle
07:48:49 07:45:01     all     14.79      0.00      3.83      2.05      0.05     79.28
07:48:49 07:45:01       0      5.90      0.00      3.49      0.20      0.03     90.37
07:48:49 07:45:01       1     28.97      0.00      4.66      1.00      0.05     65.32
07:48:49 07:45:01       2     13.50      0.00      4.08      0.74      0.03     81.64
07:48:49 07:45:01       3      7.75      0.00      3.22     12.23      0.07     76.74
07:48:49 07:45:01       4     34.07      0.00      5.00      1.57      0.07     59.30
07:48:49 07:45:01       5     12.71      0.00      3.35      0.14      0.03     83.77
07:48:49 07:45:01       6      7.90      0.00      3.51      0.20      0.03     88.36
07:48:49 07:45:01       7      7.49      0.00      3.32      0.37      0.03     88.78
07:48:49 07:46:01     all     18.95      0.00      6.93     10.10      0.08     63.93
07:48:49 07:46:01       0     19.61      0.00      6.64      4.65      0.08     69.02
07:48:49 07:46:01       1     21.39      0.00      7.95     21.17      0.08     49.42
07:48:49 07:46:01       2     15.57      0.00      5.69      3.38      0.07     75.28
07:48:49 07:46:01       3     18.11      0.00      7.09      5.45      0.07     69.27
07:48:49 07:46:01       4     19.21      0.00      7.32      7.35      0.08     66.03
07:48:49 07:46:01       5     20.54      0.00      8.68     26.25      0.12     44.43
07:48:49 07:46:01       6     19.46      0.00      5.50      1.97      0.08     72.99
07:48:49 07:46:01       7     17.76      0.00      6.64     10.77      0.07     64.76
07:48:49 07:47:01     all     20.00      0.00      2.22      1.66      0.06     76.06
07:48:49 07:47:01       0     15.48      0.00      2.23      0.44      0.07     81.79
07:48:49 07:47:01       1     24.63      0.00      3.27      2.57      0.07     69.45
07:48:49 07:47:01       2     15.14      0.00      1.70      0.18      0.07     82.91
07:48:49 07:47:01       3     20.56      0.00      1.99      0.08      0.05     77.32
07:48:49 07:47:01       4     21.30      0.00      2.27      0.05      0.05     76.33
07:48:49 07:47:01       5     22.88      0.00      2.25      2.16      0.08     72.62
07:48:49 07:47:01       6     16.07      0.00      1.83      7.62      0.08     74.40
07:48:49 07:47:01       7     23.91      0.00      2.21      0.18      0.05     73.64
07:48:49 07:48:01     all      5.61      0.00      1.81      1.62      0.04     90.91
07:48:49 07:48:01       0      5.01      0.00      1.64      2.19      0.05     91.11
07:48:49 07:48:01       1      4.18      0.00      2.25      6.78      0.05     86.75
07:48:49 07:48:01       2      6.46      0.00      1.43      0.03      0.03     92.04
07:48:49 07:48:01       3      6.00      0.00      2.48      0.28      0.05     91.18
07:48:49 07:48:01       4      5.44      0.00      1.42      0.59      0.05     92.50
07:48:49 07:48:01       5      6.26      0.00      1.69      0.15      0.03     91.87
07:48:49 07:48:01       6      5.05      0.00      2.03      2.77      0.05     90.10
07:48:49 07:48:01       7      6.53      0.00      1.56      0.22      0.05     91.65
07:48:49 Average:     all     14.83      0.00      3.69      3.86      0.06     77.56
07:48:49 Average:       0     11.50      0.00      3.49      1.87      0.06     83.08
07:48:49 Average:       1     19.75      0.00      4.53      7.88      0.06     67.78
07:48:49 Average:       2     12.67      0.00      3.22      1.09      0.05     82.97
07:48:49 Average:       3     13.12      0.00      3.69      4.47      0.06     78.66
07:48:49 Average:       4     19.94      0.00      3.99      2.39      0.06     73.62
07:48:49 Average:       5     15.59      0.00      3.98      7.15      0.07     73.22
07:48:49 Average:       6     12.13      0.00      3.21      3.15      0.06     81.44
07:48:49 Average:       7     13.94      0.00      3.42      2.88      0.05     79.70
07:48:49
07:48:49
07:48:49