Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-19311 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-newdelhi-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-l0HyXoH4xUcC/agent.2055
SSH_AGENT_PID=2057
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_5136536270119473797.key (/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/private_key_5136536270119473797.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-newdelhi-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/newdelhi^{commit} # timeout=10
Checking out Revision a0de87f9d2d88fd7f870703053c99c7149d608ec (refs/remotes/origin/newdelhi)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=30
Commit message: "Fix timeout in pap CSIT for auditing undeploys"
 > git rev-list --no-walk a0de87f9d2d88fd7f870703053c99c7149d608ec # timeout=10
provisioning config files...
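For reference, the checkout sequence above amounts to pinning a fresh clone of the policy/docker mirror to a single commit. A minimal sketch of reproducing it by hand, using the repository URL and commit hash from the log (the target directory name is an arbitrary choice, not from the log):

    # Clone the ONAP policy/docker mirror and pin it to the revision under test.
    git init policy-docker && cd policy-docker
    git remote add origin git://cloud.onap.org/mirror/policy/docker.git
    git fetch --tags origin '+refs/heads/*:refs/remotes/origin/*'
    # Detached checkout of the exact commit Jenkins built.
    git checkout -f a0de87f9d2d88fd7f870703053c99c7149d608ec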
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins11918472373864399799.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-fzrt
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-fzrt/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-fzrt/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.32
botocore==1.38.32
bs4==0.0.2
cachetools==5.5.2
certifi==2025.4.26
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.0
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.0
kubernetes==32.0.1
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.0
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.3
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.13
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
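The python-tools-install.sh step works by creating a throwaway virtualenv, installing lftools into it, and then freezing the requirements list shown above. A rough sketch of the equivalent shell steps, assuming a Python 3.10 interpreter is already selected via pyenv; the venv path /tmp/venv-example is a placeholder (the job used the randomly named /tmp/venv-fzrt):

    # Create an ephemeral venv and install lftools into it.
    python3 -m venv /tmp/venv-example
    /tmp/venv-example/bin/pip install --upgrade pip
    /tmp/venv-example/bin/pip install lftools
    # Put the venv first on PATH, as lf-activate-venv() reports doing.
    export PATH=/tmp/venv-example/bin:$PATH
    # "Generating Requirements File": record exactly what got installed.
    pip freeze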
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh /tmp/jenkins7000989274667982087.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/sh -xe /tmp/jenkins16210382643822177104.sh
+ /w/workspace/policy-pap-newdelhi-project-csit-pap/csit/run-project-csit.sh pap
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress output omitted: 60.0M downloaded, average speed 77.6M/s]
Setting project configuration for: pap
Configuring docker compose...
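Two things happen above that are worth spelling out: the docker login warns because the password is passed on the command line rather than stdin, and the Compose v2 CLI plugin is fetched with curl because `docker compose` is not yet a recognized subcommand on the build node. A hedged sketch of both steps; the registry host, credential variables, and plugin release URL are illustrative assumptions, not values taken from this log:

    # Safer login: read the password from stdin instead of argv,
    # which is what the warning above recommends.
    echo "$DOCKER_PASSWORD" | docker login "$DOCKER_REGISTRY" -u "$DOCKER_USERNAME" --password-stdin

    # Install the Compose v2 CLI plugin (the ~60 MB curl download above).
    mkdir -p ~/.docker/cli-plugins
    curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
      -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    docker compose version   # "docker compose" should now resolve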
Starting apex-pdp application with Grafana
mariadb Pulling
prometheus Pulling
kafka Pulling
simulator Pulling
policy-db-migrator Pulling
grafana Pulling
apex-pdp Pulling
zookeeper Pulling
api Pulling
pap Pulling
[per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress output omitted]
pap Pulled
api Pulled
[further per-layer progress output omitted; the excerpt ends while the remaining images are still downloading and extracting]
[==================================================>] 84.13kB/84.13kB bc8105c6553b Extracting [==================================================>] 84.13kB/84.13kB eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB eabd8714fec9 Downloading [========================> ] 185.6MB/375MB eabd8714fec9 Downloading [========================> ] 185.6MB/375MB 9038eaba24f8 Pull complete 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB 04a7796b82ca Extracting [==================================================>] 1.127kB/1.127kB da3ed5db7103 Downloading [================> ] 41.27MB/127.4MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB a721db3e3f3d Extracting [============================> ] 3.146MB/5.526MB c81b87c3efcc Downloading [=======================> ] 60.03MB/127.4MB eca0188f477e Pull complete eca0188f477e Pull complete e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B f836d47fdc4d Extracting [=====================> ] 45.12MB/107.3MB eabd8714fec9 Downloading [=========================> ] 190.4MB/375MB eabd8714fec9 Downloading [=========================> ] 190.4MB/375MB bc8105c6553b Pull complete 04a7796b82ca Pull complete 929241f867bb Extracting [==================================================>] 92B/92B da3ed5db7103 Downloading [==================> ] 46.62MB/127.4MB 929241f867bb Extracting [==================================================>] 92B/92B f3b09c502777 Extracting [==> ] 3.342MB/56.52MB simulator Pulled c81b87c3efcc Downloading [===========================> ] 71.31MB/127.4MB f836d47fdc4d Extracting [======================> ] 47.91MB/107.3MB a8d3998ab21c Pull complete 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB 89d6e2ec6372 Extracting [==================================================>] 13.79kB/13.79kB a721db3e3f3d Extracting [========================================> ] 4.522MB/5.526MB e444bcd4d577 Pull complete e444bcd4d577 Pull complete eabd8714fec9 Downloading [==========================> ] 196.3MB/375MB eabd8714fec9 Downloading [==========================> ] 196.3MB/375MB da3ed5db7103 Downloading [====================> ] 53.06MB/127.4MB c81b87c3efcc Downloading [===============================> ] 79.93MB/127.4MB f3b09c502777 Extracting [=====> ] 6.128MB/56.52MB f836d47fdc4d Extracting [=======================> ] 51.25MB/107.3MB eabd8714fec9 Downloading [===========================> ] 202.7MB/375MB eabd8714fec9 Downloading [===========================> ] 202.7MB/375MB a721db3e3f3d Extracting [===========================================> ] 4.784MB/5.526MB 929241f867bb Pull complete 37728a7352e6 Extracting [==================================================>] 92B/92B 37728a7352e6 Extracting [==================================================>] 92B/92B da3ed5db7103 Downloading [======================> ] 56.8MB/127.4MB c81b87c3efcc Downloading [====================================> ] 93.38MB/127.4MB f3b09c502777 Extracting [========> ] 9.47MB/56.52MB f836d47fdc4d Extracting [=========================> ] 54.03MB/107.3MB a721db3e3f3d Extracting 
[==================================================>] 5.526MB/5.526MB 89d6e2ec6372 Pull complete 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB 80096f8bb25e Extracting [==================================================>] 2.238kB/2.238kB eabd8714fec9 Downloading [============================> ] 211.3MB/375MB eabd8714fec9 Downloading [============================> ] 211.3MB/375MB a721db3e3f3d Pull complete 37728a7352e6 Pull complete c81b87c3efcc Downloading [========================================> ] 103MB/127.4MB 1850a929b84a Extracting [==================================================>] 149B/149B 1850a929b84a Extracting [==================================================>] 149B/149B da3ed5db7103 Downloading [=======================> ] 58.96MB/127.4MB 3f40c7aa46a6 Extracting [==================================================>] 302B/302B 3f40c7aa46a6 Extracting [==================================================>] 302B/302B f3b09c502777 Extracting [==========> ] 12.26MB/56.52MB f836d47fdc4d Extracting [===========================> ] 58.49MB/107.3MB eabd8714fec9 Downloading [============================> ] 216.7MB/375MB eabd8714fec9 Downloading [============================> ] 216.7MB/375MB 80096f8bb25e Pull complete cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB cbd359ebc87d Extracting [==================================================>] 2.23kB/2.23kB c81b87c3efcc Downloading [============================================> ] 112.7MB/127.4MB da3ed5db7103 Downloading [=========================> ] 64.85MB/127.4MB 3f40c7aa46a6 Pull complete f3b09c502777 Extracting [=============> ] 15.04MB/56.52MB f836d47fdc4d Extracting [============================> ] 61.28MB/107.3MB eabd8714fec9 Downloading [=============================> ] 223.6MB/375MB eabd8714fec9 Downloading [=============================> ] 223.6MB/375MB 1850a929b84a Pull complete 397a918c7da3 Extracting [==================================================>] 327B/327B 397a918c7da3 Extracting [==================================================>] 327B/327B cbd359ebc87d Pull complete c81b87c3efcc Downloading [================================================> ] 123.4MB/127.4MB da3ed5db7103 Downloading [============================> ] 71.81MB/127.4MB policy-db-migrator Pulled f3b09c502777 Extracting [===============> ] 17.27MB/56.52MB f836d47fdc4d Extracting [==============================> ] 64.62MB/107.3MB c81b87c3efcc Verifying Checksum c81b87c3efcc Download complete eabd8714fec9 Downloading [==============================> ] 227.9MB/375MB eabd8714fec9 Downloading [==============================> ] 227.9MB/375MB 353af139d39e Extracting [> ] 557.1kB/246.5MB da3ed5db7103 Downloading [===============================> ] 79.33MB/127.4MB 397a918c7da3 Pull complete c955f6e31a04 Downloading [=============> ] 934B/3.446kB f836d47fdc4d Extracting [===============================> ] 67.96MB/107.3MB eabd8714fec9 Downloading [===============================> ] 237MB/375MB eabd8714fec9 Downloading [===============================> ] 237MB/375MB 353af139d39e Extracting [> ] 1.671MB/246.5MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB da3ed5db7103 Downloading [=================================> ] 86.31MB/127.4MB 806be17e856d Extracting [> ] 557.1kB/89.72MB eabd8714fec9 Downloading [================================> ] 246.2MB/375MB eabd8714fec9 Downloading [================================> ] 246.2MB/375MB f836d47fdc4d Extracting 
[=================================> ] 71.3MB/107.3MB 353af139d39e Extracting [==> ] 11.14MB/246.5MB f3b09c502777 Extracting [======================> ] 25.07MB/56.52MB da3ed5db7103 Downloading [=====================================> ] 95.42MB/127.4MB 806be17e856d Extracting [=> ] 3.342MB/89.72MB eabd8714fec9 Downloading [==================================> ] 255.3MB/375MB eabd8714fec9 Downloading [==================================> ] 255.3MB/375MB c955f6e31a04 Downloading [==================================================>] 3.446kB/3.446kB c955f6e31a04 Verifying Checksum c955f6e31a04 Download complete f836d47fdc4d Extracting [===================================> ] 75.2MB/107.3MB 353af139d39e Extracting [===> ] 19.5MB/246.5MB f3b09c502777 Extracting [=============================> ] 33.42MB/56.52MB da3ed5db7103 Downloading [========================================> ] 103.5MB/127.4MB 806be17e856d Extracting [===> ] 6.128MB/89.72MB eabd8714fec9 Downloading [===================================> ] 264.9MB/375MB eabd8714fec9 Downloading [===================================> ] 264.9MB/375MB f836d47fdc4d Extracting [====================================> ] 77.99MB/107.3MB 353af139d39e Extracting [=====> ] 28.41MB/246.5MB f3b09c502777 Extracting [========================================> ] 45.68MB/56.52MB da3ed5db7103 Downloading [===========================================> ] 112MB/127.4MB 806be17e856d Extracting [=====> ] 9.47MB/89.72MB eabd8714fec9 Downloading [====================================> ] 275.1MB/375MB eabd8714fec9 Downloading [====================================> ] 275.1MB/375MB f836d47fdc4d Extracting [=====================================> ] 80.22MB/107.3MB f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB 353af139d39e Extracting [=======> ] 35.65MB/246.5MB da3ed5db7103 Downloading [===============================================> ] 121.6MB/127.4MB eabd8714fec9 Downloading [=====================================> ] 284.8MB/375MB eabd8714fec9 Downloading [=====================================> ] 284.8MB/375MB 806be17e856d Extracting [======> ] 12.26MB/89.72MB f836d47fdc4d Extracting [======================================> ] 83.56MB/107.3MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB da3ed5db7103 Verifying Checksum da3ed5db7103 Download complete 353af139d39e Extracting [========> ] 40.67MB/246.5MB eabd8714fec9 Downloading [======================================> ] 291.2MB/375MB eabd8714fec9 Downloading [======================================> ] 291.2MB/375MB 806be17e856d Extracting [========> ] 15.04MB/89.72MB f836d47fdc4d Extracting [=======================================> ] 85.79MB/107.3MB 353af139d39e Extracting [=========> ] 45.68MB/246.5MB eabd8714fec9 Downloading [========================================> ] 300.9MB/375MB eabd8714fec9 Downloading [========================================> ] 300.9MB/375MB f3b09c502777 Pull complete 806be17e856d Extracting [=========> ] 17.83MB/89.72MB 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B f836d47fdc4d Extracting [==========================================> ] 90.24MB/107.3MB 353af139d39e Extracting [=========> ] 48.46MB/246.5MB eabd8714fec9 Downloading [=========================================> ] 313.3MB/375MB eabd8714fec9 Downloading [=========================================> ] 313.3MB/375MB 806be17e856d Extracting 
[===========> ] 20.61MB/89.72MB f836d47fdc4d Extracting [===========================================> ] 94.14MB/107.3MB 353af139d39e Extracting [============> ] 60.72MB/246.5MB eabd8714fec9 Downloading [===========================================> ] 326.7MB/375MB eabd8714fec9 Downloading [===========================================> ] 326.7MB/375MB 806be17e856d Extracting [=============> ] 23.4MB/89.72MB 353af139d39e Extracting [==============> ] 70.75MB/246.5MB f836d47fdc4d Extracting [=============================================> ] 98.04MB/107.3MB eabd8714fec9 Downloading [=============================================> ] 339MB/375MB eabd8714fec9 Downloading [=============================================> ] 339MB/375MB 806be17e856d Extracting [==============> ] 26.18MB/89.72MB 353af139d39e Extracting [================> ] 80.77MB/246.5MB f836d47fdc4d Extracting [===============================================> ] 101.4MB/107.3MB eabd8714fec9 Downloading [==============================================> ] 348.7MB/375MB eabd8714fec9 Downloading [==============================================> ] 348.7MB/375MB 806be17e856d Extracting [===============> ] 28.41MB/89.72MB 353af139d39e Extracting [=================> ] 88.57MB/246.5MB f836d47fdc4d Extracting [================================================> ] 103.1MB/107.3MB eabd8714fec9 Downloading [================================================> ] 360.5MB/375MB eabd8714fec9 Downloading [================================================> ] 360.5MB/375MB 353af139d39e Extracting [===================> ] 94.7MB/246.5MB 806be17e856d Extracting [================> ] 30.08MB/89.72MB 353af139d39e Extracting [===================> ] 95.81MB/246.5MB eabd8714fec9 Downloading [================================================> ] 361.6MB/375MB eabd8714fec9 Downloading [================================================> ] 361.6MB/375MB f836d47fdc4d Extracting [================================================> ] 103.6MB/107.3MB 353af139d39e Extracting [====================> ] 103.1MB/246.5MB 806be17e856d Extracting [==================> ] 32.31MB/89.72MB eabd8714fec9 Downloading [=================================================> ] 371.2MB/375MB eabd8714fec9 Downloading [=================================================> ] 371.2MB/375MB f836d47fdc4d Extracting [================================================> ] 104.7MB/107.3MB eabd8714fec9 Verifying Checksum eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete eabd8714fec9 Download complete 353af139d39e Extracting [======================> ] 110.3MB/246.5MB 806be17e856d Extracting [===================> ] 35.09MB/89.72MB f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB eabd8714fec9 Extracting [> ] 557.1kB/375MB eabd8714fec9 Extracting [> ] 557.1kB/375MB 353af139d39e Extracting [========================> ] 120.9MB/246.5MB 806be17e856d Extracting [====================> ] 37.32MB/89.72MB 353af139d39e Extracting [==========================> ] 130.4MB/246.5MB eabd8714fec9 Extracting [=> ] 12.26MB/375MB eabd8714fec9 Extracting [=> ] 12.26MB/375MB 806be17e856d Extracting [=====================> ] 38.44MB/89.72MB 353af139d39e Extracting [============================> ] 139.8MB/246.5MB eabd8714fec9 Extracting [==> ] 18.94MB/375MB eabd8714fec9 Extracting [==> ] 18.94MB/375MB 806be17e856d Extracting [======================> ] 41.22MB/89.72MB 353af139d39e Extracting [==============================> ] 152.6MB/246.5MB 408012a7b118 Pull complete 44986281b8b9 Extracting 
[==================================================>] 4.022kB/4.022kB f836d47fdc4d Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 806be17e856d Extracting [========================> ] 43.45MB/89.72MB eabd8714fec9 Extracting [==> ] 22.28MB/375MB eabd8714fec9 Extracting [==> ] 22.28MB/375MB 353af139d39e Extracting [===============================> ] 156MB/246.5MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB 806be17e856d Extracting [=========================> ] 46.24MB/89.72MB 353af139d39e Extracting [================================> ] 162.1MB/246.5MB 8b5292c940e1 Extracting [> ] 557.1kB/63.48MB eabd8714fec9 Extracting [===> ] 29.52MB/375MB eabd8714fec9 Extracting [===> ] 29.52MB/375MB 353af139d39e Extracting [==================================> ] 171.6MB/246.5MB 806be17e856d Extracting [===========================> ] 49.02MB/89.72MB eabd8714fec9 Extracting [=====> ] 38.44MB/375MB eabd8714fec9 Extracting [=====> ] 38.44MB/375MB 353af139d39e Extracting [====================================> ] 181.6MB/246.5MB 44986281b8b9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB eabd8714fec9 Extracting [=====> ] 42.89MB/375MB eabd8714fec9 Extracting [=====> ] 42.89MB/375MB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 806be17e856d Extracting [=============================> ] 53.48MB/89.72MB 353af139d39e Extracting [=====================================> ] 184.9MB/246.5MB 8b5292c940e1 Extracting [=> ] 1.671MB/63.48MB eabd8714fec9 Extracting [======> ] 49.58MB/375MB eabd8714fec9 Extracting [======> ] 49.58MB/375MB 806be17e856d Extracting [================================> ] 57.93MB/89.72MB 353af139d39e Extracting [======================================> ] 191.6MB/246.5MB eabd8714fec9 Extracting [=======> ] 56.82MB/375MB eabd8714fec9 Extracting [=======> ] 56.82MB/375MB 8b5292c940e1 Extracting [==> ] 2.785MB/63.48MB eabd8714fec9 Extracting [=======> ] 58.49MB/375MB eabd8714fec9 Extracting [=======> ] 58.49MB/375MB 353af139d39e Extracting [========================================> ] 198.9MB/246.5MB bf70c5107ab5 Pull complete 8b5292c940e1 Extracting [==> ] 3.342MB/63.48MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 806be17e856d Extracting [=================================> ] 60.72MB/89.72MB 353af139d39e Extracting [=========================================> ] 205.6MB/246.5MB eabd8714fec9 Extracting [========> ] 64.62MB/375MB eabd8714fec9 Extracting [========> ] 64.62MB/375MB 806be17e856d Extracting [====================================> ] 65.18MB/89.72MB 8b5292c940e1 Extracting [===> ] 4.456MB/63.48MB eabd8714fec9 Extracting [=========> ] 74.09MB/375MB eabd8714fec9 Extracting [=========> ] 74.09MB/375MB 353af139d39e Extracting [===========================================> ] 215MB/246.5MB 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB 806be17e856d Extracting [=====================================> ] 67.96MB/89.72MB eabd8714fec9 Extracting [==========> ] 82.44MB/375MB 353af139d39e Extracting [=============================================> ] 222.8MB/246.5MB eabd8714fec9 Extracting [==========> ] 82.44MB/375MB eabd8714fec9 Extracting [============> ] 91.36MB/375MB eabd8714fec9 Extracting [============> ] 91.36MB/375MB 353af139d39e Extracting 
[===============================================> ] 232.3MB/246.5MB 353af139d39e Extracting [=================================================> ] 245.7MB/246.5MB 353af139d39e Extracting [==================================================>] 246.5MB/246.5MB 806be17e856d Extracting [======================================> ] 69.07MB/89.72MB eabd8714fec9 Extracting [=============> ] 98.6MB/375MB eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 1ccde423731d Pull complete 353af139d39e Pull complete 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B apex-pdp Pulled 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB 806be17e856d Extracting [========================================> ] 71.86MB/89.72MB eabd8714fec9 Extracting [==============> ] 105.3MB/375MB eabd8714fec9 Extracting [==============> ] 105.3MB/375MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 806be17e856d Extracting [========================================> ] 73.53MB/89.72MB 8b5292c940e1 Extracting [=======> ] 9.47MB/63.48MB eabd8714fec9 Extracting [==============> ] 109.2MB/375MB eabd8714fec9 Extracting [==============> ] 109.2MB/375MB 7df673c7455d Pull complete prometheus Pulled 806be17e856d Extracting [===========================================> ] 77.99MB/89.72MB 8b5292c940e1 Extracting [=========> ] 11.7MB/63.48MB eabd8714fec9 Extracting [===============> ] 113.1MB/375MB eabd8714fec9 Extracting [===============> ] 113.1MB/375MB 8b5292c940e1 Extracting [===========> ] 14.48MB/63.48MB 806be17e856d Extracting [==============================================> ] 83MB/89.72MB eabd8714fec9 Extracting [===============> ] 118.1MB/375MB eabd8714fec9 Extracting [===============> ] 118.1MB/375MB eabd8714fec9 Extracting [================> ] 123.7MB/375MB eabd8714fec9 Extracting [================> ] 123.7MB/375MB 806be17e856d Extracting [===============================================> ] 85.23MB/89.72MB 8b5292c940e1 Extracting [=============> ] 16.71MB/63.48MB eabd8714fec9 Extracting [=================> ] 128.7MB/375MB eabd8714fec9 Extracting [=================> ] 128.7MB/375MB 806be17e856d Extracting [================================================> ] 86.9MB/89.72MB 8b5292c940e1 Extracting [===============> ] 19.5MB/63.48MB eabd8714fec9 Extracting [=================> ] 133.1MB/375MB eabd8714fec9 Extracting [=================> ] 133.1MB/375MB 8b5292c940e1 Extracting [=================> ] 22.28MB/63.48MB 806be17e856d Extracting [=================================================> ] 89.13MB/89.72MB eabd8714fec9 Extracting [==================> ] 137MB/375MB eabd8714fec9 Extracting [==================> ] 137MB/375MB 806be17e856d Extracting [==================================================>] 89.72MB/89.72MB 8b5292c940e1 Extracting [===================> ] 25.07MB/63.48MB eabd8714fec9 Extracting [==================> ] 141.5MB/375MB eabd8714fec9 Extracting [==================> ] 141.5MB/375MB 8b5292c940e1 Extracting [=======================> ] 29.52MB/63.48MB eabd8714fec9 Extracting [===================> ] 147.1MB/375MB eabd8714fec9 Extracting [===================> ] 147.1MB/375MB eabd8714fec9 Extracting [===================> ] 148.2MB/375MB eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 806be17e856d Pull complete 634de6c90876 Extracting 
[==================================================>] 3.49kB/3.49kB 634de6c90876 Extracting [==================================================>] 3.49kB/3.49kB 8b5292c940e1 Extracting [=========================> ] 32.31MB/63.48MB eabd8714fec9 Extracting [====================> ] 152.1MB/375MB eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 8b5292c940e1 Extracting [============================> ] 35.65MB/63.48MB 634de6c90876 Pull complete cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB cd00854cfb1a Extracting [==================================================>] 6.971kB/6.971kB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 8b5292c940e1 Extracting [==============================> ] 38.44MB/63.48MB eabd8714fec9 Extracting [=====================> ] 161MB/375MB eabd8714fec9 Extracting [=====================> ] 161MB/375MB 8b5292c940e1 Extracting [=================================> ] 42.34MB/63.48MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB 8b5292c940e1 Extracting [====================================> ] 46.79MB/63.48MB eabd8714fec9 Extracting [========================> ] 180.5MB/375MB eabd8714fec9 Extracting [========================> ] 180.5MB/375MB 8b5292c940e1 Extracting [=======================================> ] 50.14MB/63.48MB eabd8714fec9 Extracting [=========================> ] 193.3MB/375MB eabd8714fec9 Extracting [=========================> ] 193.3MB/375MB 8b5292c940e1 Extracting [=========================================> ] 52.92MB/63.48MB eabd8714fec9 Extracting [===========================> ] 206.7MB/375MB eabd8714fec9 Extracting [===========================> ] 206.7MB/375MB 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [==============================> ] 225.6MB/375MB eabd8714fec9 Extracting [==============================> ] 225.6MB/375MB eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB eabd8714fec9 Extracting [================================> ] 244.5MB/375MB eabd8714fec9 Extracting [================================> ] 244.5MB/375MB eabd8714fec9 Extracting [=================================> ] 249.6MB/375MB eabd8714fec9 Extracting [=================================> ] 249.6MB/375MB eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB 
eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB eabd8714fec9 Extracting [====================================> ] 273MB/375MB eabd8714fec9 Extracting [====================================> ] 273MB/375MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB cd00854cfb1a Pull complete eabd8714fec9 Extracting [======================================> ] 285.8MB/375MB eabd8714fec9 Extracting [======================================> ] 285.8MB/375MB eabd8714fec9 Extracting [======================================> ] 288.6MB/375MB eabd8714fec9 Extracting [======================================> ] 288.6MB/375MB eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB 8b5292c940e1 Pull complete eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB mariadb Pulled eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 454a4350d439 Pull complete eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 9a8c18aee5ea Extracting [==================================================>] 
1.227kB/1.227kB eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 9a8c18aee5ea Pull complete eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [================================================> ] 361.5MB/375MB eabd8714fec9 Extracting [================================================> ] 361.5MB/375MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 
eabd8714fec9 Extracting [==================================================>] 375MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB grafana Pulled eabd8714fec9 Pull complete eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Pull complete 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 8f10199ed94b Extracting [================================> ] 5.702MB/8.768MB 8f10199ed94b Extracting [================================> ] 5.702MB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Pull complete f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB f3a82e9f1761 Extracting [====================================> ] 32.57MB/44.41MB f3a82e9f1761 Extracting [====================================> ] 32.57MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 
98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB c81b87c3efcc Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [======> ] 16.15MB/127.4MB c81b87c3efcc Extracting [====> ] 12.26MB/127.4MB da3ed5db7103 Extracting [============> ] 31.75MB/127.4MB c81b87c3efcc Extracting [==========> ] 26.18MB/127.4MB da3ed5db7103 Extracting [==================> ] 46.79MB/127.4MB c81b87c3efcc Extracting [================> ] 41.22MB/127.4MB da3ed5db7103 Extracting [=========================> ] 64.06MB/127.4MB c81b87c3efcc Extracting [======================> ] 57.93MB/127.4MB da3ed5db7103 Extracting [===============================> ] 79.66MB/127.4MB c81b87c3efcc Extracting [=============================> ] 74.09MB/127.4MB da3ed5db7103 Extracting [======================================> ] 96.93MB/127.4MB c81b87c3efcc Extracting [====================================> ] 91.91MB/127.4MB da3ed5db7103 Extracting [============================================> ] 114.2MB/127.4MB c81b87c3efcc Extracting [==========================================> ] 108.6MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 122MB/127.4MB c81b87c3efcc Extracting [==============================================> ] 119.8MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB c81b87c3efcc Extracting [=================================================> ] 125.3MB/127.4MB c81b87c3efcc Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c81b87c3efcc Pull complete 5ee96432c7eb Extracting [==================================================>] 3.628kB/3.628kB 5ee96432c7eb Extracting [==================================================>] 3.628kB/3.628kB c955f6e31a04 Pull complete 5ee96432c7eb Pull complete zookeeper Pulled kafka Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container simulator Creating Container mariadb Creating Container prometheus Creating Container mariadb Created Container simulator Created Container policy-db-migrator Creating Container prometheus Created Container grafana Creating Container zookeeper Created Container kafka Creating Container policy-db-migrator Created Container policy-api Creating 
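The pull summary above is what docker compose reports once every image layer has been fetched. For scripting the same pulls outside compose, a minimal sketch with the Docker SDK for Python follows; the repository paths are illustrative assumptions, since the log records only service names such as pap and api, not full image references.

import docker  # Docker SDK for Python ("docker" on PyPI)

client = docker.from_env()

# Hypothetical image references: the registry host matches the one used for
# the robot base image later in this log, but the repo paths are assumptions.
images = [
    "nexus3.onap.org:10001/onap/policy-pap:latest",
    "nexus3.onap.org:10001/onap/policy-api:latest",
]

for ref in images:
    repo, _, tag = ref.rpartition(":")
    image = client.images.pull(repo, tag=tag)  # streams the per-layer downloads seen above
    print(ref, "->", image.short_id)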
Network compose_default Creating
Network compose_default Created
Container zookeeper Creating
Container simulator Creating
Container mariadb Creating
Container prometheus Creating
Container mariadb Created
Container simulator Created
Container policy-db-migrator Creating
Container prometheus Created
Container grafana Creating
Container zookeeper Created
Container kafka Creating
Container policy-db-migrator Created
Container policy-api Creating
Container grafana Created
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-apex-pdp Creating
Container policy-apex-pdp Created
Container simulator Starting
Container mariadb Starting
Container prometheus Starting
Container zookeeper Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container mariadb Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container prometheus Started
Container grafana Starting
Container simulator Started
Container policy-pap Started
Container policy-apex-pdp Starting
Container policy-apex-pdp Started
Container grafana Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting for REST to come up on localhost port 30003...
[docker ps polled every ~5 s while waiting; intermediate snapshots omitted, final snapshot below]
NAMES             STATUS
policy-apex-pdp   Up 31 seconds
policy-pap        Up 32 seconds
policy-api        Up 35 seconds
kafka             Up 36 seconds
grafana           Up 30 seconds
mariadb           Up 36 seconds
zookeeper         Up 37 seconds
simulator         Up 33 seconds
prometheus        Up 34 seconds
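"Waiting for REST to come up on localhost port 30003..." is a readiness poll against the PAP API port published by compose. Below is a minimal sketch of such a wait loop, assuming a plain TCP probe; the actual CSIT helper script may poll an HTTP health endpoint instead.

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    # Retry a TCP connect until it succeeds or the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(5)  # containers report "Up N seconds" while we wait
    return False

if wait_for_port("localhost", 30003):
    print("REST is up on localhost:30003")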
Build docker image for robot framework
Error: No such image: policy-csit-robot
Cloning into '/w/workspace/policy-pap-newdelhi-project-csit-pap/csit/resources/tests/models'...
Build robot framework docker image
Sending build context to Docker daemon 16MB
Step 1/9 : FROM nexus3.onap.org:10001/library/python:3.10-slim-bullseye
3.10-slim-bullseye: Pulling from library/python
[base image layer pulls omitted; all four layers downloaded and extracted]
Digest: sha256:dd4c0e03b5887369da59ac8f97f2697baf7c33c5c7659d274297e9514d40b68c
Status: Downloaded newer image for nexus3.onap.org:10001/library/python:3.10-slim-bullseye
 ---> db29290af7bb
Step 2/9 : ARG CSIT_SCRIPT=${CSIT_SCRIPT}
 ---> c23c50063e17
Step 3/9 : ARG ROBOT_FILE=${ROBOT_FILE}
 ---> 3af02800b3e6
Step 4/9 : ENV ROBOT_WORKSPACE=/opt/robotworkspace ROBOT_FILE=$ROBOT_FILE CLAMP_K8S_TEST=$CLAMP_K8S_TEST
 ---> d54fa7e5bda8
Step 5/9 : RUN python3 -m pip -qq install --upgrade pip && python3 -m pip -qq install --upgrade --extra-index-url="https://nexus3.onap.org/repository/PyPi.staging/simple" 'robotframework-onap==0.6.0.*' --pre && python3 -m pip -qq install --upgrade confluent-kafka && python3 -m pip freeze
pip freeze: bcrypt==4.3.0 certifi==2025.4.26 cffi==1.17.1 charset-normalizer==3.4.2 confluent-kafka==2.10.0 cryptography==45.0.3 decorator==5.2.1 deepdiff==8.5.0 dnspython==2.7.0 future==1.0.0 idna==3.10 Jinja2==3.1.6 jsonpath-rw==1.4.0 kafka-python==2.2.11 MarkupSafe==3.0.2 more-itertools==5.0.0 orderly-set==5.4.1 paramiko==3.5.1 pbr==6.1.1 ply==3.11 protobuf==6.31.1 pycparser==2.22 PyNaCl==1.5.0 PyYAML==6.0.2 requests==2.32.3 robotframework==7.3 robotframework-onap==0.6.0.dev105 robotframework-requests==1.0a14 robotlibcore-temp==1.0.2 six==1.17.0 urllib3==2.4.0
 ---> 2c87724de812
Step 6/9 : RUN mkdir -p ${ROBOT_WORKSPACE}
 ---> 39dda2517df1
Step 7/9 : COPY scripts/run-test.sh tests/ ${ROBOT_WORKSPACE}/
 ---> 0a155cd1f321
Step 8/9 : WORKDIR ${ROBOT_WORKSPACE}
 ---> ce123717817b
Step 9/9 : CMD ["sh", "-c", "./run-test.sh" ]
 ---> fcddddcc00db
Successfully built fcddddcc00db
Successfully tagged policy-csit-robot:latest
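The nine-step build above can also be driven from Python. A sketch with the Docker SDK follows; the build context path and build-arg values are assumptions, since only the tag and the CSIT_SCRIPT/ROBOT_FILE arg names appear in the log.

import docker

client = docker.from_env()

# Context path and arg values here are illustrative; the tag matches the log.
image, build_log = client.images.build(
    path="csit/resources",
    tag="policy-csit-robot:latest",
    buildargs={"CSIT_SCRIPT": "run-test.sh", "ROBOT_FILE": "pap-test.robot"},
    rm=True,  # drop intermediate containers, as the daemon does above
)
for chunk in build_log:
    if "stream" in chunk:
        print(chunk["stream"], end="")  # "Step 1/9 : FROM ...", etc.
print("built", image.short_id)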
top - 17:02:33 up 3 min, 0 users, load average: 3.14, 1.51, 0.59
Tasks: 211 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16.7 us, 4.1 sy, 0.0 ni, 74.8 id, 4.3 wa, 0.0 hi, 0.1 si, 0.1 st
       total   used   free   shared   buff/cache   available
Mem:     31G   3.0G    21G     1.3M         6.7G         27G
Swap:   1.0G     0B   1.0G

NAMES             STATUS
policy-apex-pdp   Up 57 seconds
policy-pap        Up 58 seconds
policy-api        Up About a minute
kafka             Up About a minute
grafana           Up 56 seconds
mariadb           Up About a minute
zookeeper         Up About a minute
simulator         Up 59 seconds
prometheus        Up About a minute

CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O       PIDS
7d512310b704   policy-apex-pdp   0.63%   172MiB / 31.41GiB     0.53%   24.6kB / 27kB    0B / 0B         48
804aae987fc5   policy-pap        3.55%   539.4MiB / 31.41GiB   1.68%   105kB / 99kB     0B / 149MB      64
c417a1377e46   policy-api        0.10%   754.6MiB / 31.41GiB   2.35%   988kB / 647kB    0B / 0B         53
577edd828052   kafka             3.97%   385.3MiB / 31.41GiB   1.20%   122kB / 123kB    0B / 528kB      87
0e9ba037c5d8   grafana           0.12%   101.2MiB / 31.41GiB   0.31%   19.3MB / 191kB   0B / 30.7MB     23
405629b33a71   mariadb           0.03%   102MiB / 31.41GiB     0.32%   970kB / 1.22MB   11MB / 71.6MB   40
a93adaf124b7   zookeeper         0.07%   87.42MiB / 31.41GiB   0.27%   57.7kB / 50kB    131kB / 381kB   61
29f3b0bdc413   simulator         0.17%   122.5MiB / 31.41GiB   0.38%   1.27kB / 0B      0B / 0B         77
3509422088e7   prometheus        0.00%   19.37MiB / 31.41GiB   0.06%   1.56kB / 474B    0B / 0B         12

Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit |   -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit |   -v POLICY_API_IP:policy-api:6969
policy-csit |   -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit |   -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit |   -v POLICY_PAP_IP:policy-pap:6969
policy-csit |   -v APEX_IP:policy-apex-pdp:6969
policy-csit |   -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit |   -v KAFKA_IP:kafka:9092
policy-csit |   -v PROMETHEUS_IP:prometheus:9090
policy-csit |   -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit |   -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit |   -v DROOLS_IP:policy-drools-apps:6969
policy-csit |   -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit |   -v TEMP_FOLDER:/tmp/distribution
policy-csit |   -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit |   -v CLAMP_K8S_TEST:
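Inside the container, "Run Robot test" passes each of those -v pairs to robot. A minimal sketch of the equivalent call through the robot.run API, with a subset of the variables copied from the listing above:

from robot import run

rc = run(
    "pap-test.robot", "pap-slas.robot",
    variable=[                      # same name:value form as the -v flags above
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_API_IP:policy-api:6969",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
    ],
    outputdir="/tmp/results",       # matches the Output/Log/Report paths below
)
print("RESULT:", rc)                # 0 when all tests pass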
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Test
policy-csit | ==============================================================================
policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS |
policy-csit | Healthcheck :: Verify policy pap health check | PASS |
policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | 22 tests, 22 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas.Pap-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | 8 tests, 8 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Pap-Test & Pap-Slas | PASS | 30 tests, 30 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
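The suite also leaves machine-readable results at /tmp/results/output.xml. A short sketch of recovering the pass/fail counts with the Robot Framework result API (RF 7.x, as installed in the image); this helper is purely illustrative and not part of the CSIT scripts.

from robot.api import ExecutionResult

result = ExecutionResult("/tmp/results/output.xml")
total = result.statistics.total
print(f"{total.total} tests, {total.passed} passed, {total.failed} failed")
print("return code:", result.return_code)  # equals the number of failed tests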
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | policy-csit | 8 tests, 8 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas | PASS | policy-csit | 30 tests, 30 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 policy-csit exited with code 0 NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes policy-api Up 2 minutes kafka Up 2 minutes grafana Up 2 minutes mariadb Up 2 minutes zookeeper Up 2 minutes simulator Up 2 minutes prometheus Up 2 minutes Shut down started! Collecting logs from docker compose containers... 
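Editor's note: the suite verdict above is also written to /tmp/results/output.xml, which can be re-read outside the container with Robot Framework's public result API. A minimal sketch, assuming robotframework 4+ is installed on the host and the container's /tmp/results directory has been copied out (attribute names per robot.api; not part of the CSIT scripts themselves):

    # Re-read the CSIT verdict from the generated output.xml.
    from robot.api import ExecutionResult

    result = ExecutionResult("/tmp/results/output.xml")
    stats = result.statistics.total  # aggregate pass/fail counters
    print(f"{result.suite.name}: {stats.passed} passed, {stats.failed} failed")
    # Mirror the container's "RESULT: 0" convention: non-zero on any failure.
    raise SystemExit(0 if stats.failed == 0 else 1)
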
======== Logs from grafana ========
grafana | logger=settings t=2025-06-07T17:01:37.741131792Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-07T17:01:37Z
grafana | logger=settings t=2025-06-07T17:01:37.741519266Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-07T17:01:37.741559308Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-07T17:01:37.74158608Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-07T17:01:37.741609821Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-07T17:01:37.741650814Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-07T17:01:37.741687616Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-07T17:01:37.741713738Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-07T17:01:37.741741899Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-07T17:01:37.741808343Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-07T17:01:37.741832605Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-07T17:01:37.741857326Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-07T17:01:37.741905389Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-07T17:01:37.741934201Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-07T17:01:37.741962273Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-07T17:01:37.741990335Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-07T17:01:37.742022937Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-07T17:01:37.742051458Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-07T17:01:37.742103762Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-07T17:01:37.742459073Z level=info msg=FeatureToggles onPremToCloudMigrations=true cloudWatchNewLabelParsing=true lokiQuerySplitting=true preinstallAutoUpdate=true pinNavItems=true logsInfiniteScrolling=true newPDFRendering=true prometheusUsesCombobox=true newFiltersUI=true groupToNestedTableTransformation=true logsPanelControls=true alertingQueryAndExpressionsStepMode=true dashboardSceneSolo=true alertingRulePermanentlyDelete=true azureMonitorEnableUserAuth=true logsExploreTableVisualisation=true lokiStructuredMetadata=true angularDeprecationUI=true alertRuleRestore=true nestedFolders=true alertingApiServer=true newDashboardSharingComponent=true cloudWatchCrossAccountQuerying=true failWrongDSUID=true influxdbBackendMigration=true transformationsRedesign=true kubernetesPlaylists=true dashgpt=true externalCorePlugins=true grafanaconThemes=true logsContextDatasourceUi=true alertingSimplifiedRouting=true annotationPermissionUpdate=true dashboardSceneForViewers=true formatString=true addFieldFromCalculationStatFunctions=true recordedQueriesMulti=true dashboardScene=true tlsMemcached=true alertingInsights=true azureMonitorPrometheusExemplars=true alertingUIOptimizeReducer=true lokiQueryHints=true unifiedStorageSearchPermissionFiltering=true pluginsDetailsRightPanel=true reportingUseRawTimeRange=true alertingRuleVersionHistoryRestore=true correlations=true ssoSettingsApi=true prometheusAzureOverrideAudience=true awsAsyncQueryCaching=true publicDashboardsScene=true lokiLabelNamesQueryApi=true recoveryThreshold=true alertingRuleRecoverDeleted=true dataplaneFrontendFallback=true ssoSettingsSAML=true kubernetesClientDashboardsFolders=true alertingNotificationsStepMode=true panelMonitoring=true promQLScope=true unifiedRequestLog=true logRowsPopoverMenu=true useSessionStorageForRedirection=true cloudWatchRoundUpEndTime=true
grafana | logger=sqlstore t=2025-06-07T17:01:37.742576711Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-07T17:01:37.742631444Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-07T17:01:37.744246194Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-07T17:01:37.744284836Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-07T17:01:37.745026562Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-07T17:01:37.745926098Z level=info msg="Migration successfully executed" id="create migration_log table" duration=899.346µs
grafana | logger=migrator t=2025-06-07T17:01:37.749492488Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-07T17:01:37.750094695Z level=info msg="Migration successfully executed" id="create user table" duration=601.607µs
grafana | logger=migrator t=2025-06-07T17:01:37.754975057Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-07T17:01:37.755563343Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=587.796µs
grafana | logger=migrator t=2025-06-07T17:01:37.758687896Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-07T17:01:37.75925501Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=565.374µs
grafana | logger=migrator t=2025-06-07T17:01:37.762335251Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.762867324Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=531.904µs
grafana | logger=migrator t=2025-06-07T17:01:37.767031001Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.767533912Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=502.611µs
grafana | logger=migrator t=2025-06-07T17:01:37.773641799Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.775457011Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.814632ms
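Editor's note: the settings lines above show the layered configuration Grafana applied here: defaults.ini is loaded first, then /etc/grafana/grafana.ini, then command-line arguments, with GF_* environment variables (GF_PATHS_DATA and friends) winning last. A minimal sketch of that precedence order, with hypothetical keys and a hypothetical loader (not Grafana's actual implementation):

    import os

    # Illustration of layered settings: later layers win.
    defaults = {"paths.data": "/usr/share/grafana/data"}   # defaults.ini
    ini_overrides = {"paths.data": "/var/lib/grafana"}     # /etc/grafana/grafana.ini
    env_overrides = {
        "paths." + key[len("GF_PATHS_"):].lower(): value   # e.g. GF_PATHS_DATA
        for key, value in os.environ.items()
        if key.startswith("GF_PATHS_")
    }

    settings = {**defaults, **ini_overrides, **env_overrides}
    print(settings["paths.data"])  # env value wins when GF_PATHS_DATA is set
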
grafana | logger=migrator t=2025-06-07T17:01:37.778530081Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-07T17:01:37.77917535Z level=info msg="Migration successfully executed" id="create user table v2" duration=645.049µs
grafana | logger=migrator t=2025-06-07T17:01:37.783512609Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-07T17:01:37.784117946Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=604.918µs
grafana | logger=migrator t=2025-06-07T17:01:37.787363036Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-07T17:01:37.787923881Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=560.515µs
grafana | logger=migrator t=2025-06-07T17:01:37.791214984Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-07T17:01:37.791524393Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=309.009µs
grafana | logger=migrator t=2025-06-07T17:01:37.794351008Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-07T17:01:37.794758583Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=407.205µs
grafana | logger=migrator t=2025-06-07T17:01:37.799266862Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-07T17:01:37.800125794Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=858.382µs
grafana | logger=migrator t=2025-06-07T17:01:37.802232625Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-07T17:01:37.802303439Z level=info msg="Migration successfully executed" id="Update user table charset" duration=51.223µs
grafana | logger=migrator t=2025-06-07T17:01:37.805008416Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-07T17:01:37.805817576Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=806.32µs
grafana | logger=migrator t=2025-06-07T17:01:37.808739616Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-07T17:01:37.808954349Z level=info msg="Migration successfully executed" id="Add missing user data" duration=213.693µs
grafana | logger=migrator t=2025-06-07T17:01:37.813573785Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-07T17:01:37.814461079Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=886.884µs
grafana | logger=migrator t=2025-06-07T17:01:37.840831697Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-07T17:01:37.841396453Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=564.226µs
grafana | logger=migrator t=2025-06-07T17:01:37.844377077Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-07T17:01:37.845220939Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=843.012µs
grafana | logger=migrator t=2025-06-07T17:01:37.849219115Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-07T17:01:37.854893476Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=5.67384ms
grafana | logger=migrator t=2025-06-07T17:01:37.857969296Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-07T17:01:37.858815178Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=842.922µs
grafana | logger=migrator t=2025-06-07T17:01:37.861939871Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-07T17:01:37.862141103Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=200.562µs
grafana | logger=migrator t=2025-06-07T17:01:37.864957247Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-07T17:01:37.865574565Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=616.738µs
grafana | logger=migrator t=2025-06-07T17:01:37.869983828Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-07T17:01:37.870878333Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=894.115µs
grafana | logger=migrator t=2025-06-07T17:01:37.873393878Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-07T17:01:37.873665545Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=272.997µs
grafana | logger=migrator t=2025-06-07T17:01:37.876447667Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-07T17:01:37.876891104Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=442.877µs
grafana | logger=migrator t=2025-06-07T17:01:37.879706648Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-07T17:01:37.880079181Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=372.073µs
grafana | logger=migrator t=2025-06-07T17:01:37.883914128Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-07T17:01:37.884219727Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=322.04µs
grafana | logger=migrator t=2025-06-07T17:01:37.88620553Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-07T17:01:37.886801806Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=595.806µs
grafana | logger=migrator t=2025-06-07T17:01:37.889326502Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-07T17:01:37.889891407Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=564.475µs
grafana | logger=migrator t=2025-06-07T17:01:37.894843783Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-07T17:01:37.895404988Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=560.114µs
grafana | logger=migrator t=2025-06-07T17:01:37.898168308Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-07T17:01:37.898738513Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=568.995µs
grafana | logger=migrator t=2025-06-07T17:01:37.901670974Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-07T17:01:37.902224008Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=551.934µs
grafana | logger=migrator t=2025-06-07T17:01:37.906581838Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-07T17:01:37.906632311Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=50.173µs
grafana | logger=migrator t=2025-06-07T17:01:37.909323577Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.90985473Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=531.202µs
grafana | logger=migrator t=2025-06-07T17:01:37.912767449Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.91326926Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=501.921µs
grafana | logger=migrator t=2025-06-07T17:01:37.915830778Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.91635082Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=519.942µs
grafana | logger=migrator t=2025-06-07T17:01:37.920401861Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.920906532Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=504.691µs
grafana | logger=migrator t=2025-06-07T17:01:37.923493992Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-07T17:01:37.925655136Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.160523ms
grafana | logger=migrator t=2025-06-07T17:01:37.928525233Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-07T17:01:37.929163202Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=635.909µs
grafana | logger=migrator t=2025-06-07T17:01:37.933429006Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-07T17:01:37.934943379Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.516865ms
grafana | logger=migrator t=2025-06-07T17:01:37.938580043Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-07T17:01:37.939873333Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.29305ms
- v2" duration=1.29305ms grafana | logger=migrator t=2025-06-07T17:01:37.942765032Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-07T17:01:37.943613124Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=847.742µs grafana | logger=migrator t=2025-06-07T17:01:37.947879768Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-07T17:01:37.948715719Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=835.441µs grafana | logger=migrator t=2025-06-07T17:01:37.951694113Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:37.952169192Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=474.229µs grafana | logger=migrator t=2025-06-07T17:01:37.954417011Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-07T17:01:37.954979486Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=561.735µs grafana | logger=migrator t=2025-06-07T17:01:37.959642144Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-07T17:01:37.960419482Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=776.347µs grafana | logger=migrator t=2025-06-07T17:01:37.964021744Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-07T17:01:37.96606137Z level=info msg="Migration successfully executed" id="create star table" duration=2.041737ms grafana | logger=migrator t=2025-06-07T17:01:37.96979305Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-07T17:01:37.970661954Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=868.604µs grafana | logger=migrator t=2025-06-07T17:01:37.977036968Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-07T17:01:37.978569703Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.531905ms grafana | logger=migrator t=2025-06-07T17:01:37.981787981Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-07T17:01:37.983323706Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.534985ms grafana | logger=migrator t=2025-06-07T17:01:37.986388185Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-07T17:01:37.987896418Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.507473ms grafana | logger=migrator t=2025-06-07T17:01:37.991045783Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-07T17:01:37.992131009Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.082336ms grafana | logger=migrator t=2025-06-07T17:01:37.998067857Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator 
t=2025-06-07T17:01:37.999482614Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.413926ms grafana | logger=migrator t=2025-06-07T17:01:38.003358933Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-07T17:01:38.004505524Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.152022ms grafana | logger=migrator t=2025-06-07T17:01:38.00735497Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-07T17:01:38.008309449Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=953.688µs grafana | logger=migrator t=2025-06-07T17:01:38.011085Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-07T17:01:38.011950824Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=862.584µs grafana | logger=migrator t=2025-06-07T17:01:38.018277845Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-07T17:01:38.01934405Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.065896ms grafana | logger=migrator t=2025-06-07T17:01:38.022383598Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-07T17:01:38.023340547Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=957.07µs grafana | logger=migrator t=2025-06-07T17:01:38.026246627Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-07T17:01:38.026324412Z level=info msg="Migration successfully executed" id="Update org table charset" duration=76.514µs grafana | logger=migrator t=2025-06-07T17:01:38.03068032Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-07T17:01:38.030759455Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=79.075µs grafana | logger=migrator t=2025-06-07T17:01:38.038190564Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-07T17:01:38.038642302Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=451.397µs grafana | logger=migrator t=2025-06-07T17:01:38.042120316Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-07T17:01:38.043643661Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.522865ms grafana | logger=migrator t=2025-06-07T17:01:38.047403813Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-07T17:01:38.048924357Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.521735ms grafana | logger=migrator t=2025-06-07T17:01:38.052299705Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-07T17:01:38.053281005Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=981.11µs grafana | logger=migrator t=2025-06-07T17:01:38.085852377Z level=info msg="Executing migration" id="create dashboard_tag 
table" grafana | logger=migrator t=2025-06-07T17:01:38.088065994Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=2.214067ms grafana | logger=migrator t=2025-06-07T17:01:38.091550129Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-07T17:01:38.093155898Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.604699ms grafana | logger=migrator t=2025-06-07T17:01:38.097514387Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-07T17:01:38.098114674Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=599.667µs grafana | logger=migrator t=2025-06-07T17:01:38.102875098Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-07T17:01:38.110932356Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.056427ms grafana | logger=migrator t=2025-06-07T17:01:38.114275852Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-07T17:01:38.114971705Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=696.042µs grafana | logger=migrator t=2025-06-07T17:01:38.118194623Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-07T17:01:38.118815983Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=620.259µs grafana | logger=migrator t=2025-06-07T17:01:38.125708588Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-07T17:01:38.126588082Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=881.574µs grafana | logger=migrator t=2025-06-07T17:01:38.131876128Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:38.132529549Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=652.031µs grafana | logger=migrator t=2025-06-07T17:01:38.135875536Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-07T17:01:38.137307724Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.430637ms grafana | logger=migrator t=2025-06-07T17:01:38.14242876Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-07T17:01:38.142458442Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=31.062µs grafana | logger=migrator t=2025-06-07T17:01:38.146109057Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-07T17:01:38.148963213Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.854356ms grafana | logger=migrator t=2025-06-07T17:01:38.15197918Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-07T17:01:38.15392645Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" 
duration=1.94635ms grafana | logger=migrator t=2025-06-07T17:01:38.1592694Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.161173318Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.903048ms grafana | logger=migrator t=2025-06-07T17:01:38.164692345Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.165496484Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=803.809µs grafana | logger=migrator t=2025-06-07T17:01:38.168703802Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.17076782Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.063408ms grafana | logger=migrator t=2025-06-07T17:01:38.176042776Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.177443093Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.400337ms grafana | logger=migrator t=2025-06-07T17:01:38.180980541Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-07T17:01:38.182351835Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.370324ms grafana | logger=migrator t=2025-06-07T17:01:38.18550914Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-07T17:01:38.185536891Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=27.801µs grafana | logger=migrator t=2025-06-07T17:01:38.190504968Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-07T17:01:38.19053403Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.232µs grafana | logger=migrator t=2025-06-07T17:01:38.19361742Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.196943347Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.325376ms grafana | logger=migrator t=2025-06-07T17:01:38.202738274Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.204926729Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.187665ms grafana | logger=migrator t=2025-06-07T17:01:38.208161399Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.210167072Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.004943ms grafana | logger=migrator t=2025-06-07T17:01:38.215516923Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.217603892Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.086309ms grafana | logger=migrator t=2025-06-07T17:01:38.220514352Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-07T17:01:38.22081083Z level=info msg="Migration 
grafana | logger=migrator t=2025-06-07T17:01:38.223790574Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-07T17:01:38.224835799Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.044995ms
grafana | logger=migrator t=2025-06-07T17:01:38.23021159Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-07T17:01:38.231000619Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=789.169µs
grafana | logger=migrator t=2025-06-07T17:01:38.235968635Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-07T17:01:38.235995748Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.233µs
grafana | logger=migrator t=2025-06-07T17:01:38.240349696Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-07T17:01:38.241233741Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=883.204µs
grafana | logger=migrator t=2025-06-07T17:01:38.248813569Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-07T17:01:38.250347944Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.533645ms
grafana | logger=migrator t=2025-06-07T17:01:38.25401644Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.259758835Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.744675ms
grafana | logger=migrator t=2025-06-07T17:01:38.270580213Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-07T17:01:38.27245917Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.853075ms
grafana | logger=migrator t=2025-06-07T17:01:38.27733914Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.277956358Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=617.358µs
grafana | logger=migrator t=2025-06-07T17:01:38.282481228Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.283755967Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.273879ms
grafana | logger=migrator t=2025-06-07T17:01:38.287105304Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-07T17:01:38.287439774Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=334.15µs
grafana | logger=migrator t=2025-06-07T17:01:38.292287544Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-07T17:01:38.292841168Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=553.344µs
grafana | logger=migrator t=2025-06-07T17:01:38.296688465Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-07T17:01:38.301433338Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.743433ms
grafana | logger=migrator t=2025-06-07T17:01:38.30487321Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-07T17:01:38.305637247Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=764.107µs
grafana | logger=migrator t=2025-06-07T17:01:38.337545198Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-07T17:01:38.338121993Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=576.085µs
grafana | logger=migrator t=2025-06-07T17:01:38.345252314Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-07T17:01:38.345527441Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=274.797µs
grafana | logger=migrator t=2025-06-07T17:01:38.348714958Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-07T17:01:38.349580451Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=865.163µs
grafana | logger=migrator t=2025-06-07T17:01:38.352546144Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-07T17:01:38.354825705Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.282831ms
grafana | logger=migrator t=2025-06-07T17:01:38.359344284Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-07T17:01:38.361606163Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.261459ms
grafana | logger=migrator t=2025-06-07T17:01:38.36462889Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-07T17:01:38.365535656Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=932.738µs
grafana | logger=migrator t=2025-06-07T17:01:38.368848331Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-07T17:01:38.371053427Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.204516ms
grafana | logger=migrator t=2025-06-07T17:01:38.375094546Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-07T17:01:38.377217438Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.119891ms
grafana | logger=migrator t=2025-06-07T17:01:38.381291159Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-07T17:01:38.381731657Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=439.068µs
grafana | logger=migrator t=2025-06-07T17:01:38.38487261Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-07T17:01:38.38715022Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.277ms
grafana | logger=migrator t=2025-06-07T17:01:38.389982315Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-07T17:01:38.391071143Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.087978ms
grafana | logger=migrator t=2025-06-07T17:01:38.395501076Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-07T17:01:38.395949984Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=448.588µs
grafana | logger=migrator t=2025-06-07T17:01:38.398252396Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-07T17:01:38.399129651Z level=info msg="Migration successfully executed" id="create data_source table" duration=877.835µs
grafana | logger=migrator t=2025-06-07T17:01:38.402135146Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-07T17:01:38.402944216Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=808.25µs
grafana | logger=migrator t=2025-06-07T17:01:38.408214591Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-07T17:01:38.409395774Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.180353ms
grafana | logger=migrator t=2025-06-07T17:01:38.414724533Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.416357054Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.635951ms
grafana | logger=migrator t=2025-06-07T17:01:38.420001199Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.420717853Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=716.364µs
grafana | logger=migrator t=2025-06-07T17:01:38.425306337Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.432076884Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.769347ms
grafana | logger=migrator t=2025-06-07T17:01:38.434953143Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-07T17:01:38.43555737Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=603.697µs
grafana | logger=migrator t=2025-06-07T17:01:38.440607922Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.441312165Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=703.463µs
grafana | logger=migrator t=2025-06-07T17:01:38.444268538Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.44594163Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.671582ms
grafana | logger=migrator t=2025-06-07T17:01:38.45272356Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-07T17:01:38.453244602Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=520.682µs
grafana | logger=migrator t=2025-06-07T17:01:38.459156387Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-07T17:01:38.463173605Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.016949ms
grafana | logger=migrator t=2025-06-07T17:01:38.467769888Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-07T17:01:38.47038381Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.613192ms
grafana | logger=migrator t=2025-06-07T17:01:38.473151961Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.473181183Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=29.942µs
grafana | logger=migrator t=2025-06-07T17:01:38.47798664Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-07T17:01:38.478299839Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=312.719µs
grafana | logger=migrator t=2025-06-07T17:01:38.48139561Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-07T17:01:38.483918286Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.521616ms
grafana | logger=migrator t=2025-06-07T17:01:38.489037472Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-07T17:01:38.489222173Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=184.491µs
grafana | logger=migrator t=2025-06-07T17:01:38.494990319Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-07T17:01:38.495286398Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=296.428µs
grafana | logger=migrator t=2025-06-07T17:01:38.498307824Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-07T17:01:38.501549884Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.24283ms
grafana | logger=migrator t=2025-06-07T17:01:38.504438833Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-07T17:01:38.504613513Z level=info msg="Migration successfully executed" id="Update uid value" duration=176.821µs
grafana | logger=migrator t=2025-06-07T17:01:38.510052449Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-07T17:01:38.511200911Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.150832ms
grafana | logger=migrator t=2025-06-07T17:01:38.515611192Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-07T17:01:38.5165414Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=929.798µs
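Editor's note: every step above follows the same shape, an "Executing migration" line, the schema change itself, then a "Migration successfully executed" line carrying a per-step duration. A minimal sketch of that instrumentation pattern (illustrative only; Grafana's actual migrator is written in Go):

    import logging
    import time
    from contextlib import contextmanager

    logging.basicConfig(format="%(message)s", level=logging.INFO)
    log = logging.getLogger("migrator")

    @contextmanager
    def migration(migration_id: str):
        # Emit the same execute/executed pair the log above shows;
        # the success line is skipped if the body raises.
        log.info('msg="Executing migration" id="%s"', migration_id)
        start = time.perf_counter()
        yield
        elapsed_us = (time.perf_counter() - start) * 1e6
        log.info('msg="Migration successfully executed" id="%s" duration=%.3fµs',
                 migration_id, elapsed_us)

    # Usage: with migration("create example table"): run_ddl(...)
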
grafana | logger=migrator t=2025-06-07T17:01:38.521103202Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-07T17:01:38.524101747Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.997965ms
grafana | logger=migrator t=2025-06-07T17:01:38.528666419Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-07T17:01:38.531259469Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.5915ms
grafana | logger=migrator t=2025-06-07T17:01:38.535842842Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-07T17:01:38.535860743Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=18.681µs
grafana | logger=migrator t=2025-06-07T17:01:38.538444532Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-07T17:01:38.539244812Z level=info msg="Migration successfully executed" id="create api_key table" duration=800.28µs
grafana | logger=migrator t=2025-06-07T17:01:38.544098081Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-07T17:01:38.544962995Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=867.974µs
grafana | logger=migrator t=2025-06-07T17:01:38.548244018Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-07T17:01:38.549112541Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=868.263µs
grafana | logger=migrator t=2025-06-07T17:01:38.564746427Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-07T17:01:38.5656092Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=859.072µs
grafana | logger=migrator t=2025-06-07T17:01:38.572427641Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.57371659Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.288749ms
grafana | logger=migrator t=2025-06-07T17:01:38.577235817Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.578812935Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.576838ms
grafana | logger=migrator t=2025-06-07T17:01:38.583839896Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.584712059Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=871.763µs
grafana | logger=migrator t=2025-06-07T17:01:38.587751057Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.59491633Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.163933ms
grafana | logger=migrator t=2025-06-07T17:01:38.598233644Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-07T17:01:38.599227646Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=993.611µs
grafana | logger=migrator t=2025-06-07T17:01:38.607677577Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.608596565Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=915.817µs
grafana | logger=migrator t=2025-06-07T17:01:38.613023897Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.614363501Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.339384ms
grafana | logger=migrator t=2025-06-07T17:01:38.617614471Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-07T17:01:38.618605332Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=990.421µs
grafana | logger=migrator t=2025-06-07T17:01:38.623399788Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-07T17:01:38.623838245Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=437.687µs
grafana | logger=migrator t=2025-06-07T17:01:38.626207562Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-07T17:01:38.627250806Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.036933ms
grafana | logger=migrator t=2025-06-07T17:01:38.631224952Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.6313533Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=95.205µs
grafana | logger=migrator t=2025-06-07T17:01:38.636373449Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-07T17:01:38.639098547Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.725188ms
grafana | logger=migrator t=2025-06-07T17:01:38.642928654Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2025-06-07T17:01:38.645696865Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.767811ms
grafana | logger=migrator t=2025-06-07T17:01:38.650689824Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2025-06-07T17:01:38.650966481Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=275.957µs
grafana | logger=migrator t=2025-06-07T17:01:38.655234514Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2025-06-07T17:01:38.658063718Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.828064ms
grafana | logger=migrator t=2025-06-07T17:01:38.661105176Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2025-06-07T17:01:38.663812393Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.706147ms
grafana | logger=migrator t=2025-06-07T17:01:38.666701092Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2025-06-07T17:01:38.667570446Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=868.354µs
grafana | logger=migrator t=2025-06-07T17:01:38.67071038Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2025-06-07T17:01:38.671405442Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=693.922µs
grafana | logger=migrator t=2025-06-07T17:01:38.675962044Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2025-06-07T17:01:38.677207231Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.274638ms
grafana | logger=migrator t=2025-06-07T17:01:38.681549069Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2025-06-07T17:01:38.682965486Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.415757ms
grafana | logger=migrator t=2025-06-07T17:01:38.691531095Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2025-06-07T17:01:38.692569289Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.036453ms
grafana | logger=migrator t=2025-06-07T17:01:38.696047614Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2025-06-07T17:01:38.69743981Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.391545ms
grafana | logger=migrator t=2025-06-07T17:01:38.700984468Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2025-06-07T17:01:38.701001559Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.651µs
grafana | logger=migrator t=2025-06-07T17:01:38.70423413Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.704257471Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=23.611µs
grafana | logger=migrator t=2025-06-07T17:01:38.708529924Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2025-06-07T17:01:38.711583253Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.052749ms
grafana | logger=migrator t=2025-06-07T17:01:38.714927629Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2025-06-07T17:01:38.717791766Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.856006ms
grafana | logger=migrator t=2025-06-07T17:01:38.721202977Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2025-06-07T17:01:38.721219718Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=17.301µs
grafana | logger=migrator t=2025-06-07T17:01:38.725971201Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2025-06-07T17:01:38.7267542Z level=info msg="Migration successfully executed" id="create quota table v1" duration=782.249µs
grafana | logger=migrator t=2025-06-07T17:01:38.73339846Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.734796607Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.401277ms
grafana | logger=migrator t=2025-06-07T17:01:38.738461893Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.738497395Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=36.772µs
grafana | logger=migrator t=2025-06-07T17:01:38.742490901Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-07T17:01:38.743984744Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.492643ms
grafana | logger=migrator t=2025-06-07T17:01:38.750201338Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-07T17:01:38.75105345Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=851.632µs
grafana | logger=migrator t=2025-06-07T17:01:38.754250737Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-07T17:01:38.758592776Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.339719ms
grafana | logger=migrator t=2025-06-07T17:01:38.764115277Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.764138428Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=23.311µs
grafana | logger=migrator t=2025-06-07T17:01:38.771840754Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-07T17:01:38.772399458Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=557.604µs
grafana | logger=migrator t=2025-06-07T17:01:38.776038203Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
grafana | logger=migrator t=2025-06-07T17:01:38.787615017Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=11.577134ms
grafana | logger=migrator t=2025-06-07T17:01:38.809827809Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-07T17:01:38.811119639Z level=info msg="Migration successfully executed" id="create session table" duration=1.29089ms
grafana | logger=migrator t=2025-06-07T17:01:38.816090747Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-07T17:01:38.816347762Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=255.106µs
grafana | logger=migrator t=2025-06-07T17:01:38.819997777Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-07T17:01:38.820119465Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=120.568µs
grafana | logger=migrator t=2025-06-07T17:01:38.82506218Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-07T17:01:38.826096724Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.033014ms
grafana | logger=migrator t=2025-06-07T17:01:38.834999833Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-07T17:01:38.836276093Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.236257ms
grafana | logger=migrator t=2025-06-07T17:01:38.839925068Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.839983621Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=59.764µs
grafana | logger=migrator t=2025-06-07T17:01:38.843591554Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.843614715Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=23.851µs
grafana | logger=migrator t=2025-06-07T17:01:38.846580158Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-07T17:01:38.849745945Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.164766ms
grafana | logger=migrator t=2025-06-07T17:01:38.853921122Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-07T17:01:38.857051795Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.130173ms
grafana | logger=migrator t=2025-06-07T17:01:38.86052966Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-07T17:01:38.860642727Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=112.107µs
grafana | logger=migrator t=2025-06-07T17:01:38.865800275Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-07T17:01:38.865915362Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=114.457µs
grafana | logger=migrator t=2025-06-07T17:01:38.872786597Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-07T17:01:38.874161571Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.373744ms
grafana | logger=migrator t=2025-06-07T17:01:38.877829979Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-07T17:01:38.877866801Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=37.573µs
grafana | logger=migrator t=2025-06-07T17:01:38.881487064Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-07T17:01:38.885999003Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.511949ms
grafana | logger=migrator t=2025-06-07T17:01:38.890375913Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-07T17:01:38.890593096Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=216.313µs
grafana | logger=migrator t=2025-06-07T17:01:38.894939875Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-07T17:01:38.898405999Z level=info
msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.428841ms grafana | logger=migrator t=2025-06-07T17:01:38.901775767Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-07T17:01:38.905221399Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.444652ms grafana | logger=migrator t=2025-06-07T17:01:38.913582105Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-07T17:01:38.913602197Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=20.752µs grafana | logger=migrator t=2025-06-07T17:01:38.917806677Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-07T17:01:38.918896944Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.089006ms grafana | logger=migrator t=2025-06-07T17:01:38.922601383Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-07T17:01:38.924070873Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.46864ms grafana | logger=migrator t=2025-06-07T17:01:38.929294556Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-07T17:01:38.930366682Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.071646ms grafana | logger=migrator t=2025-06-07T17:01:38.934500938Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-07T17:01:38.935398933Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=897.825µs grafana | logger=migrator t=2025-06-07T17:01:38.938821834Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-07T17:01:38.939669827Z level=info msg="Migration successfully executed" id="add index alert state" duration=848.793µs grafana | logger=migrator t=2025-06-07T17:01:38.944874158Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-07T17:01:38.946310867Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.435518ms grafana | logger=migrator t=2025-06-07T17:01:38.953181351Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-07T17:01:38.954318351Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.13634ms grafana | logger=migrator t=2025-06-07T17:01:38.958075003Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-07T17:01:38.959498871Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.423328ms grafana | logger=migrator t=2025-06-07T17:01:38.963173698Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-07T17:01:38.964092824Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=918.846µs grafana | logger=migrator t=2025-06-07T17:01:38.96986357Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - 
v1" grafana | logger=migrator t=2025-06-07T17:01:38.979503466Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.639346ms grafana | logger=migrator t=2025-06-07T17:01:38.983479751Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-07T17:01:38.984030076Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=549.975µs grafana | logger=migrator t=2025-06-07T17:01:38.992738594Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-07T17:01:38.994198133Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.458439ms grafana | logger=migrator t=2025-06-07T17:01:39.00047189Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:39.000805452Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=332.872µs grafana | logger=migrator t=2025-06-07T17:01:39.004323078Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-07T17:01:39.004919554Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=596.176µs grafana | logger=migrator t=2025-06-07T17:01:39.008091579Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-07T17:01:39.009363978Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.271228ms grafana | logger=migrator t=2025-06-07T17:01:39.013796919Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-07T17:01:39.018429454Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.630195ms grafana | logger=migrator t=2025-06-07T17:01:39.023424021Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-07T17:01:39.027205523Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.780541ms grafana | logger=migrator t=2025-06-07T17:01:39.058400727Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-07T17:01:39.06282573Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.425042ms grafana | logger=migrator t=2025-06-07T17:01:39.066967023Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-07T17:01:39.07066487Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.696397ms grafana | logger=migrator t=2025-06-07T17:01:39.073690187Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-07T17:01:39.074585051Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=893.865µs grafana | logger=migrator t=2025-06-07T17:01:39.077724824Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-07T17:01:39.077747025Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=22.621µs 
grafana | logger=migrator t=2025-06-07T17:01:39.081032417Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-07T17:01:39.081072279Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=40.713µs grafana | logger=migrator t=2025-06-07T17:01:39.088107031Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-07T17:01:39.08939067Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.282049ms grafana | logger=migrator t=2025-06-07T17:01:39.094841714Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-07T17:01:39.096104543Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.264808ms grafana | logger=migrator t=2025-06-07T17:01:39.100170942Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-07T17:01:39.10129229Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.120318ms grafana | logger=migrator t=2025-06-07T17:01:39.104570392Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-07T17:01:39.105788807Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.218304ms grafana | logger=migrator t=2025-06-07T17:01:39.11089238Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-07T17:01:39.112432924Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.540274ms grafana | logger=migrator t=2025-06-07T17:01:39.115830833Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-07T17:01:39.119976357Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.145554ms grafana | logger=migrator t=2025-06-07T17:01:39.126516269Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-07T17:01:39.133084683Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.567574ms grafana | logger=migrator t=2025-06-07T17:01:39.137099019Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-07T17:01:39.13728826Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=188.622µs grafana | logger=migrator t=2025-06-07T17:01:39.139978975Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:39.140667767Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=688.462µs grafana | logger=migrator t=2025-06-07T17:01:39.145291462Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-07T17:01:39.146644845Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.356504ms grafana | logger=migrator 
t=2025-06-07T17:01:39.150032553Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-07T17:01:39.157202882Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=7.163719ms grafana | logger=migrator t=2025-06-07T17:01:39.160693638Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-07T17:01:39.160709668Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=16.751µs grafana | logger=migrator t=2025-06-07T17:01:39.167976984Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-07T17:01:39.169358029Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.407887ms grafana | logger=migrator t=2025-06-07T17:01:39.17410887Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-07T17:01:39.175476125Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.367135ms grafana | logger=migrator t=2025-06-07T17:01:39.179028112Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-07T17:01:39.179172681Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=143.519µs grafana | logger=migrator t=2025-06-07T17:01:39.184461786Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-07T17:01:39.185400483Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=938.217µs grafana | logger=migrator t=2025-06-07T17:01:39.189616033Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-07T17:01:39.191067171Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.450588ms grafana | logger=migrator t=2025-06-07T17:01:39.194603079Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-07T17:01:39.195746619Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.14373ms grafana | logger=migrator t=2025-06-07T17:01:39.200171671Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-07T17:01:39.201067306Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=894.914µs grafana | logger=migrator t=2025-06-07T17:01:39.207461638Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-07T17:01:39.208513243Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.051215ms grafana | logger=migrator t=2025-06-07T17:01:39.213095784Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-07T17:01:39.214523962Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.427238ms grafana | logger=migrator t=2025-06-07T17:01:39.218878819Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-07T17:01:39.218918231Z level=info msg="Migration successfully executed" id="Update annotation table 
charset" duration=36.983µs grafana | logger=migrator t=2025-06-07T17:01:39.222912907Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.226973316Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.057718ms grafana | logger=migrator t=2025-06-07T17:01:39.230596559Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-07T17:01:39.23144315Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=846.091µs grafana | logger=migrator t=2025-06-07T17:01:39.235811668Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.239871698Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.05854ms grafana | logger=migrator t=2025-06-07T17:01:39.24414377Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-07T17:01:39.244880016Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=735.036µs grafana | logger=migrator t=2025-06-07T17:01:39.25164155Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-07T17:01:39.253161494Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.518904ms grafana | logger=migrator t=2025-06-07T17:01:39.257862132Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-07T17:01:39.259192434Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.330252ms grafana | logger=migrator t=2025-06-07T17:01:39.263075822Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-07T17:01:39.276628194Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.556172ms grafana | logger=migrator t=2025-06-07T17:01:39.310968332Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-07T17:01:39.312137124Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.167812ms grafana | logger=migrator t=2025-06-07T17:01:39.317564718Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-07T17:01:39.319068159Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.502302ms grafana | logger=migrator t=2025-06-07T17:01:39.323999242Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-07T17:01:39.324556146Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=555.804µs grafana | logger=migrator t=2025-06-07T17:01:39.329226303Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-07T17:01:39.330144109Z level=info msg="Migration successfully executed" id="drop table 
annotation_tag_v2" duration=916.586µs grafana | logger=migrator t=2025-06-07T17:01:39.335451105Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-07T17:01:39.335717111Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=264.436µs grafana | logger=migrator t=2025-06-07T17:01:39.339248968Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.346139162Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.886703ms grafana | logger=migrator t=2025-06-07T17:01:39.354164334Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.358190381Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.023167ms grafana | logger=migrator t=2025-06-07T17:01:39.363002437Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.364551231Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.540594ms grafana | logger=migrator t=2025-06-07T17:01:39.36958175Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.371233542Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.651572ms grafana | logger=migrator t=2025-06-07T17:01:39.376759971Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-07T17:01:39.376976655Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=216.583µs grafana | logger=migrator t=2025-06-07T17:01:39.379728644Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-07T17:01:39.387696143Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.96527ms grafana | logger=migrator t=2025-06-07T17:01:39.395421586Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-07T17:01:39.396058916Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=636.75µs grafana | logger=migrator t=2025-06-07T17:01:39.401239223Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-07T17:01:39.40150255Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=263.277µs grafana | logger=migrator t=2025-06-07T17:01:39.40508542Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-07T17:01:39.405892419Z level=info msg="Migration successfully executed" id="Move region to single row" duration=805.349µs grafana | logger=migrator t=2025-06-07T17:01:39.409895346Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.411549547Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.653561ms grafana | logger=migrator t=2025-06-07T17:01:39.416077965Z level=info msg="Executing migration" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.417044374Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=961.878µs grafana | logger=migrator t=2025-06-07T17:01:39.420263521Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.421285015Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.021154ms grafana | logger=migrator t=2025-06-07T17:01:39.427619623Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.429616736Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.996263ms grafana | logger=migrator t=2025-06-07T17:01:39.43570491Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.436544071Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=838.931µs grafana | logger=migrator t=2025-06-07T17:01:39.439328502Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-07T17:01:39.440171063Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=842.191µs grafana | logger=migrator t=2025-06-07T17:01:39.4423931Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-07T17:01:39.442414901Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=22.281µs grafana | logger=migrator t=2025-06-07T17:01:39.447859176Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-07T17:01:39.447879717Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=20.441µs grafana | logger=migrator t=2025-06-07T17:01:39.451808188Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-07T17:01:39.451838481Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=31.263µs grafana | logger=migrator t=2025-06-07T17:01:39.456010116Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-07T17:01:39.4572155Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.205073ms grafana | logger=migrator t=2025-06-07T17:01:39.460382395Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-07T17:01:39.461160362Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=777.257µs grafana | logger=migrator t=2025-06-07T17:01:39.467542274Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-07T17:01:39.469888918Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.930279ms grafana | logger=migrator 
t=2025-06-07T17:01:39.47463703Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-07T17:01:39.476173014Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.533743ms grafana | logger=migrator t=2025-06-07T17:01:39.479401802Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-07T17:01:39.479577782Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=174.81µs grafana | logger=migrator t=2025-06-07T17:01:39.484007465Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-07T17:01:39.484463523Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=454.197µs grafana | logger=migrator t=2025-06-07T17:01:39.488880434Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-07T17:01:39.488911206Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=31.072µs grafana | logger=migrator t=2025-06-07T17:01:39.493040629Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-07T17:01:39.502028871Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=8.985571ms grafana | logger=migrator t=2025-06-07T17:01:39.510037812Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-07T17:01:39.510639989Z level=info msg="Migration successfully executed" id="create team table" duration=601.507µs grafana | logger=migrator t=2025-06-07T17:01:39.515223471Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-07T17:01:39.516178859Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=955.258µs grafana | logger=migrator t=2025-06-07T17:01:39.519473482Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-07T17:01:39.520571109Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.096517ms grafana | logger=migrator t=2025-06-07T17:01:39.545699222Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-07T17:01:39.55349093Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.794628ms grafana | logger=migrator t=2025-06-07T17:01:39.557579261Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-07T17:01:39.557771193Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=191.462µs grafana | logger=migrator t=2025-06-07T17:01:39.561041373Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:39.562290071Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.247257ms grafana | logger=migrator t=2025-06-07T17:01:39.56700587Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator 
t=2025-06-07T17:01:39.573634697Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.628777ms grafana | logger=migrator t=2025-06-07T17:01:39.576558096Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-07T17:01:39.58101515Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.456354ms grafana | logger=migrator t=2025-06-07T17:01:39.584174314Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-07T17:01:39.584951082Z level=info msg="Migration successfully executed" id="create team member table" duration=775.547µs grafana | logger=migrator t=2025-06-07T17:01:39.58964785Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-07T17:01:39.5907891Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.14288ms grafana | logger=migrator t=2025-06-07T17:01:39.595944967Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-07T17:01:39.596878484Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=933.147µs grafana | logger=migrator t=2025-06-07T17:01:39.599882668Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-07T17:01:39.601169067Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.284239ms grafana | logger=migrator t=2025-06-07T17:01:39.607439612Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-07T17:01:39.615175147Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.733785ms grafana | logger=migrator t=2025-06-07T17:01:39.618383364Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-07T17:01:39.621670226Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.281101ms grafana | logger=migrator t=2025-06-07T17:01:39.626760928Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-07T17:01:39.631303808Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.544629ms grafana | logger=migrator t=2025-06-07T17:01:39.635326744Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-07T17:01:39.636234999Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=907.365µs grafana | logger=migrator t=2025-06-07T17:01:39.640878385Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-07T17:01:39.641668493Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=790.358µs grafana | logger=migrator t=2025-06-07T17:01:39.644704249Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-07T17:01:39.645557752Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=855.093µs grafana | logger=migrator t=2025-06-07T17:01:39.648895537Z 
level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-07T17:01:39.649793692Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=897.275µs grafana | logger=migrator t=2025-06-07T17:01:39.654045383Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-07T17:01:39.654917807Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=871.344µs grafana | logger=migrator t=2025-06-07T17:01:39.658469765Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-07T17:01:39.659895563Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.425038ms grafana | logger=migrator t=2025-06-07T17:01:39.66425645Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-07T17:01:39.665668077Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.410457ms grafana | logger=migrator t=2025-06-07T17:01:39.670792262Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-07T17:01:39.672635544Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.844723ms grafana | logger=migrator t=2025-06-07T17:01:39.677564127Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-07T17:01:39.679214289Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.648851ms grafana | logger=migrator t=2025-06-07T17:01:39.683572116Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-07T17:01:39.684194734Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=621.938µs grafana | logger=migrator t=2025-06-07T17:01:39.690216634Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-07T17:01:39.690673932Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=456.308µs grafana | logger=migrator t=2025-06-07T17:01:39.693963094Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-07T17:01:39.695141006Z level=info msg="Migration successfully executed" id="create tag table" duration=1.176881ms grafana | logger=migrator t=2025-06-07T17:01:39.701525398Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-07T17:01:39.702501818Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=976.04µs grafana | logger=migrator t=2025-06-07T17:01:39.709213129Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-07T17:01:39.710074123Z level=info msg="Migration successfully executed" id="create login attempt table" duration=860.364µs grafana | logger=migrator t=2025-06-07T17:01:39.7131217Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-07T17:01:39.714175994Z level=info msg="Migration successfully executed" 
id="add index login_attempt.username" duration=1.053734ms grafana | logger=migrator t=2025-06-07T17:01:39.717233462Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-07T17:01:39.718280557Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.046785ms grafana | logger=migrator t=2025-06-07T17:01:39.722636594Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-07T17:01:39.735841855Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.204811ms grafana | logger=migrator t=2025-06-07T17:01:39.738985347Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-07T17:01:39.739569303Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=583.346µs grafana | logger=migrator t=2025-06-07T17:01:39.744791424Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-07T17:01:39.745746932Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=955.108µs grafana | logger=migrator t=2025-06-07T17:01:39.750401418Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:39.75075996Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=357.632µs grafana | logger=migrator t=2025-06-07T17:01:39.753748684Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-07T17:01:39.754472769Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=722.966µs grafana | logger=migrator t=2025-06-07T17:01:39.757318973Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-07T17:01:39.758142854Z level=info msg="Migration successfully executed" id="create user auth table" duration=823.371µs grafana | logger=migrator t=2025-06-07T17:01:39.763839813Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-07T17:01:39.764833654Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=992.991µs grafana | logger=migrator t=2025-06-07T17:01:39.767735592Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-07T17:01:39.767793126Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=54.763µs grafana | logger=migrator t=2025-06-07T17:01:39.770237046Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.775559523Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.321756ms grafana | logger=migrator t=2025-06-07T17:01:39.785042614Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.790402844Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.361459ms grafana | logger=migrator t=2025-06-07T17:01:39.793226587Z level=info msg="Executing migration" 
id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.798806729Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.579202ms grafana | logger=migrator t=2025-06-07T17:01:39.802297355Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.808278452Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.980317ms grafana | logger=migrator t=2025-06-07T17:01:39.812812169Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.813893936Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.081507ms grafana | logger=migrator t=2025-06-07T17:01:39.817041709Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.822410209Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.36769ms grafana | logger=migrator t=2025-06-07T17:01:39.827778909Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-07T17:01:39.833042212Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.262473ms grafana | logger=migrator t=2025-06-07T17:01:39.836301972Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-07T17:01:39.837308983Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.006481ms grafana | logger=migrator t=2025-06-07T17:01:39.843228347Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-07T17:01:39.84441334Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.184274ms grafana | logger=migrator t=2025-06-07T17:01:39.847472928Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-07T17:01:39.848724034Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.250476ms grafana | logger=migrator t=2025-06-07T17:01:39.853475686Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-07T17:01:39.855501741Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.025404ms grafana | logger=migrator t=2025-06-07T17:01:39.860276663Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-07T17:01:39.861345229Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.067456ms grafana | logger=migrator t=2025-06-07T17:01:39.865695736Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-07T17:01:39.867502527Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.806851ms grafana | logger=migrator t=2025-06-07T17:01:39.872947131Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-07T17:01:39.879853685Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" 
duration=6.907384ms grafana | logger=migrator t=2025-06-07T17:01:39.884612517Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-07T17:01:39.885736076Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.122959ms grafana | logger=migrator t=2025-06-07T17:01:39.888777613Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-07T17:01:39.894304702Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.52579ms grafana | logger=migrator t=2025-06-07T17:01:39.898728014Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-07T17:01:39.899546414Z level=info msg="Migration successfully executed" id="create cache_data table" duration=815.019µs grafana | logger=migrator t=2025-06-07T17:01:39.905394843Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-07T17:01:39.906306789Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=914.037µs grafana | logger=migrator t=2025-06-07T17:01:39.909495315Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-07T17:01:39.910285523Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=790.168µs grafana | logger=migrator t=2025-06-07T17:01:39.913803549Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-07T17:01:39.915336863Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.532804ms grafana | logger=migrator t=2025-06-07T17:01:39.921214894Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-07T17:01:39.921232785Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=18.481µs grafana | logger=migrator t=2025-06-07T17:01:39.927114726Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-07T17:01:39.927197391Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=82.715µs grafana | logger=migrator t=2025-06-07T17:01:39.931085701Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-07T17:01:39.932954985Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.868184ms grafana | logger=migrator t=2025-06-07T17:01:39.937298282Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-07T17:01:39.938277682Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=977.519µs grafana | logger=migrator t=2025-06-07T17:01:39.944273159Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-07T17:01:39.945857637Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.583008ms grafana | logger=migrator t=2025-06-07T17:01:39.949688182Z level=info msg="Executing migration" 
id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-07T17:01:39.949726904Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=40.392µs grafana | logger=migrator t=2025-06-07T17:01:39.953235529Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-07T17:01:39.954164357Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=929.268µs grafana | logger=migrator t=2025-06-07T17:01:39.957080206Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-07T17:01:39.9579678Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=887.534µs grafana | logger=migrator t=2025-06-07T17:01:39.962558883Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-07T17:01:39.963488589Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=929.467µs grafana | logger=migrator t=2025-06-07T17:01:39.966456091Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-07T17:01:39.967500066Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.045795ms grafana | logger=migrator t=2025-06-07T17:01:39.973909289Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-07T17:01:39.980058377Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.148337ms grafana | logger=migrator t=2025-06-07T17:01:39.983464515Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-07T17:01:39.984420395Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=957.68µs grafana | logger=migrator t=2025-06-07T17:01:39.987492653Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-07T17:01:39.987569208Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=76.875µs grafana | logger=migrator t=2025-06-07T17:01:39.991821788Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-07T17:01:39.992683122Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=860.994µs grafana | logger=migrator t=2025-06-07T17:01:39.995931001Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-07T17:01:39.996960064Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.028733ms grafana | logger=migrator t=2025-06-07T17:01:40.000191572Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator 
t=2025-06-07T17:01:40.001189584Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=997.712µs grafana | logger=migrator t=2025-06-07T17:01:40.036023727Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-07T17:01:40.036041078Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=18.321µs grafana | logger=migrator t=2025-06-07T17:01:40.042001284Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-07T17:01:40.042899139Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=897.285µs grafana | logger=migrator t=2025-06-07T17:01:40.04858133Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-07T17:01:40.049698758Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.116588ms grafana | logger=migrator t=2025-06-07T17:01:40.0529789Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-07T17:01:40.053927438Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=945.528µs grafana | logger=migrator t=2025-06-07T17:01:40.057261933Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-07T17:01:40.058198151Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=933.567µs grafana | logger=migrator t=2025-06-07T17:01:40.062399379Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.069270312Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.868973ms grafana | logger=migrator t=2025-06-07T17:01:40.072760226Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.073735186Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=974.4µs grafana | logger=migrator t=2025-06-07T17:01:40.07671775Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.07769461Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=975.64µs grafana | logger=migrator t=2025-06-07T17:01:40.083342717Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.110474466Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.131309ms grafana | logger=migrator t=2025-06-07T17:01:40.113719006Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.14126699Z level=info msg="Migration successfully 
executed" id="rename def_uid to rule_uid in alert_instance" duration=27.547444ms grafana | logger=migrator t=2025-06-07T17:01:40.144728273Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.14548866Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=757.916µs grafana | logger=migrator t=2025-06-07T17:01:40.151795037Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.152853963Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.057906ms grafana | logger=migrator t=2025-06-07T17:01:40.15753667Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-07T17:01:40.166930579Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.396109ms grafana | logger=migrator t=2025-06-07T17:01:40.173195804Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-07T17:01:40.179817411Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.619817ms grafana | logger=migrator t=2025-06-07T17:01:40.184776356Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:40.186101338Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.324052ms grafana | logger=migrator t=2025-06-07T17:01:40.190738133Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-07T17:01:40.191736705Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=998.202µs grafana | logger=migrator t=2025-06-07T17:01:40.196757003Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-07T17:01:40.198394374Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.635531ms grafana | logger=migrator t=2025-06-07T17:01:40.202035777Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-07T17:01:40.203759534Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.723197ms grafana | logger=migrator t=2025-06-07T17:01:40.207382076Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-07T17:01:40.207528095Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=147.119µs grafana | logger=migrator t=2025-06-07T17:01:40.21409607Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:40.220382866Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.286086ms grafana | logger=migrator t=2025-06-07T17:01:40.226469811Z level=info msg="Executing migration" id="add column annotations to alert_rule" 
grafana | logger=migrator t=2025-06-07T17:01:40.232756407Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.285657ms grafana | logger=migrator t=2025-06-07T17:01:40.236657078Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:40.243058431Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.374722ms grafana | logger=migrator t=2025-06-07T17:01:40.246156892Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-07T17:01:40.247194775Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.035804ms grafana | logger=migrator t=2025-06-07T17:01:40.253406138Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-07T17:01:40.25556699Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.161052ms grafana | logger=migrator t=2025-06-07T17:01:40.265546274Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:40.271715633Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.168619ms grafana | logger=migrator t=2025-06-07T17:01:40.278058503Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:40.28890062Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=10.845227ms grafana | logger=migrator t=2025-06-07T17:01:40.292999523Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-07T17:01:40.293790701Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=790.718µs grafana | logger=migrator t=2025-06-07T17:01:40.296470646Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:40.301214008Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.741862ms grafana | logger=migrator t=2025-06-07T17:01:40.307015705Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:40.313676254Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.659789ms grafana | logger=migrator t=2025-06-07T17:01:40.317158568Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:40.317178129Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=20.211µs grafana | logger=migrator t=2025-06-07T17:01:40.32011802Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:40.321124103Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.005913ms grafana | logger=migrator t=2025-06-07T17:01:40.326431219Z level=info msg="Executing migration" id="add index in alert_rule_version 
table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-07T17:01:40.327597421Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.164551ms grafana | logger=migrator t=2025-06-07T17:01:40.331047473Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-07T17:01:40.332800011Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.751959ms grafana | logger=migrator t=2025-06-07T17:01:40.337613517Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-07T17:01:40.337633868Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=21.401µs grafana | logger=migrator t=2025-06-07T17:01:40.340556188Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-07T17:01:40.347575609Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.022441ms grafana | logger=migrator t=2025-06-07T17:01:40.352934109Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-07T17:01:40.359673254Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.737465ms grafana | logger=migrator t=2025-06-07T17:01:40.363533181Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-07T17:01:40.368240771Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.70472ms grafana | logger=migrator t=2025-06-07T17:01:40.373252208Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-07T17:01:40.380276241Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.023033ms grafana | logger=migrator t=2025-06-07T17:01:40.387482014Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-07T17:01:40.39831853Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=10.837307ms grafana | logger=migrator t=2025-06-07T17:01:40.401803325Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:40.401818196Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=15.211µs grafana | logger=migrator t=2025-06-07T17:01:40.405390356Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-07T17:01:40.406126291Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=735.225µs grafana | logger=migrator t=2025-06-07T17:01:40.412206105Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.42123076Z level=info msg="Migration successfully executed" id="Add 
column default in alert_configuration" duration=9.028896ms grafana | logger=migrator t=2025-06-07T17:01:40.424080766Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-07T17:01:40.424101057Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=20.561µs grafana | logger=migrator t=2025-06-07T17:01:40.427104972Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.434261251Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.135199ms grafana | logger=migrator t=2025-06-07T17:01:40.437362192Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-07T17:01:40.438354894Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=991.971µs grafana | logger=migrator t=2025-06-07T17:01:40.442816907Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.453648544Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.831497ms grafana | logger=migrator t=2025-06-07T17:01:40.457455018Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-07T17:01:40.458107778Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=651.881µs grafana | logger=migrator t=2025-06-07T17:01:40.461006057Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-07T17:01:40.461965095Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=957.878µs grafana | logger=migrator t=2025-06-07T17:01:40.466529806Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.473327554Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.799678ms grafana | logger=migrator t=2025-06-07T17:01:40.47635076Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-07T17:01:40.477158889Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=807.919µs grafana | logger=migrator t=2025-06-07T17:01:40.480155265Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-07T17:01:40.481189868Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.033833ms grafana | logger=migrator t=2025-06-07T17:01:40.527911672Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-07T17:01:40.529396113Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.48038ms grafana | logger=migrator t=2025-06-07T17:01:40.534312175Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | 
logger=migrator t=2025-06-07T17:01:40.535961786Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.648511ms grafana | logger=migrator t=2025-06-07T17:01:40.539439091Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-07T17:01:40.539461192Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=22.761µs grafana | logger=migrator t=2025-06-07T17:01:40.543638799Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-07T17:01:40.544695334Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.056175ms grafana | logger=migrator t=2025-06-07T17:01:40.547612814Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.548639817Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.026722ms grafana | logger=migrator t=2025-06-07T17:01:40.552313842Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-07T17:01:40.552706676Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-07T17:01:40.556813529Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-07T17:01:40.557189582Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=375.183µs grafana | logger=migrator t=2025-06-07T17:01:40.559877218Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-07T17:01:40.561009247Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.131109ms grafana | logger=migrator t=2025-06-07T17:01:40.564174642Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-07T17:01:40.571325792Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.148679ms grafana | logger=migrator t=2025-06-07T17:01:40.575723003Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-07T17:01:40.576860072Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.136299ms grafana | logger=migrator t=2025-06-07T17:01:40.580090111Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-07T17:01:40.58120765Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.117379ms grafana | logger=migrator t=2025-06-07T17:01:40.584528204Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-07T17:01:40.585355765Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=826.841µs grafana | logger=migrator t=2025-06-07T17:01:40.59063958Z 
level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-07T17:01:40.59242346Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.78352ms grafana | logger=migrator t=2025-06-07T17:01:40.595745084Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:40.597424007Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.679433ms grafana | logger=migrator t=2025-06-07T17:01:40.603220213Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-07T17:01:40.603262857Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=44.024µs grafana | logger=migrator t=2025-06-07T17:01:40.612774861Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-07T17:01:40.612808543Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=34.632µs grafana | logger=migrator t=2025-06-07T17:01:40.616044072Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-07T17:01:40.627132354Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=11.125254ms grafana | logger=migrator t=2025-06-07T17:01:40.630353332Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-07T17:01:40.630860443Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=506.971µs grafana | logger=migrator t=2025-06-07T17:01:40.633968034Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-07T17:01:40.635390352Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.421828ms grafana | logger=migrator t=2025-06-07T17:01:40.641570832Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-07T17:01:40.641913963Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=340.331µs grafana | logger=migrator t=2025-06-07T17:01:40.64998008Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-07T17:01:40.651594789Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.6139ms grafana | logger=migrator t=2025-06-07T17:01:40.654899662Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-07T17:01:40.656428916Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.526344ms grafana | logger=migrator t=2025-06-07T17:01:40.660269393Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-07T17:01:40.694843389Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.576677ms grafana | logger=migrator t=2025-06-07T17:01:40.697650142Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator 
t=2025-06-07T17:01:40.705023735Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.371793ms grafana | logger=migrator t=2025-06-07T17:01:40.710193744Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-07T17:01:40.710375005Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=180.171µs grafana | logger=migrator t=2025-06-07T17:01:40.713355928Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-07T17:01:40.7446047Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.248292ms grafana | logger=migrator t=2025-06-07T17:01:40.747709641Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-07T17:01:40.777131291Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.42071ms grafana | logger=migrator t=2025-06-07T17:01:40.780118185Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-07T17:01:40.78085347Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=734.635µs grafana | logger=migrator t=2025-06-07T17:01:40.784063857Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-07T17:01:40.784857665Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=793.768µs grafana | logger=migrator t=2025-06-07T17:01:40.788782037Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-07T17:01:40.789272907Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=494.21µs grafana | logger=migrator t=2025-06-07T17:01:40.793636395Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-07T17:01:40.795192271Z level=info msg="Migration successfully executed" id="create permission table" duration=1.554956ms grafana | logger=migrator t=2025-06-07T17:01:40.799934033Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-07T17:01:40.801137827Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.203484ms grafana | logger=migrator t=2025-06-07T17:01:40.807893863Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-07T17:01:40.809791519Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.895076ms grafana | logger=migrator t=2025-06-07T17:01:40.813838769Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-07T17:01:40.814932325Z level=info msg="Migration successfully executed" id="create role table" duration=1.093117ms grafana | logger=migrator t=2025-06-07T17:01:40.819595103Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-07T17:01:40.827054021Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.458319ms grafana | logger=migrator t=2025-06-07T17:01:40.830184534Z level=info 
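The data_keys sequence above is a rename-with-copy dance: name becomes id, a fresh name column is added, values are copied across, and the columns end up as name/label. The timings make the shape visible: each rename costs ~30ms because it alters the table, while the value copy is a plain UPDATE at ~180µs. A sketch of that cheap copy step:

package migrations

import "database/sql"

// copyDataKeyIDsIntoName mirrors "copy data_keys id column values into name":
// a single UPDATE, which explains the microsecond-scale duration compared to
// the surrounding ALTERs.
func copyDataKeyIDsIntoName(db *sql.DB) error {
	_, err := db.Exec(`UPDATE data_keys SET name = id`)
	return err
}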
msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-07T17:01:40.837825743Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.640319ms grafana | logger=migrator t=2025-06-07T17:01:40.842983611Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-07T17:01:40.84411099Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.126919ms grafana | logger=migrator t=2025-06-07T17:01:40.849980622Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-07T17:01:40.851936241Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.955299ms grafana | logger=migrator t=2025-06-07T17:01:40.855693513Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:40.857194055Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.500942ms grafana | logger=migrator t=2025-06-07T17:01:40.86069611Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-07T17:01:40.861712743Z level=info msg="Migration successfully executed" id="create team role table" duration=1.016213ms grafana | logger=migrator t=2025-06-07T17:01:40.866658668Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-07T17:01:40.867881432Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.222295ms grafana | logger=migrator t=2025-06-07T17:01:40.871066918Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-07T17:01:40.872399671Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.327192ms grafana | logger=migrator t=2025-06-07T17:01:40.875730525Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-07T17:01:40.876996033Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.280079ms grafana | logger=migrator t=2025-06-07T17:01:40.880247103Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-07T17:01:40.881281257Z level=info msg="Migration successfully executed" id="create user role table" duration=1.033495ms grafana | logger=migrator t=2025-06-07T17:01:40.886129244Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-07T17:01:40.88735049Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.220936ms grafana | logger=migrator t=2025-06-07T17:01:40.896645721Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-07T17:01:40.897991875Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.345273ms grafana | logger=migrator t=2025-06-07T17:01:40.901238264Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-07T17:01:40.903156432Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.917588ms grafana | logger=migrator t=2025-06-07T17:01:40.90669614Z level=info msg="Executing migration" 
id="create builtin role table" grafana | logger=migrator t=2025-06-07T17:01:40.907893153Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.197513ms grafana | logger=migrator t=2025-06-07T17:01:40.911222548Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-07T17:01:40.91239361Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.170162ms grafana | logger=migrator t=2025-06-07T17:01:40.915619119Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-07T17:01:40.917697697Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.076518ms grafana | logger=migrator t=2025-06-07T17:01:40.922602148Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-07T17:01:40.931327855Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.726118ms grafana | logger=migrator t=2025-06-07T17:01:40.937814204Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-07T17:01:40.939013437Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.199313ms grafana | logger=migrator t=2025-06-07T17:01:40.943656383Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-07T17:01:40.94603875Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.380866ms grafana | logger=migrator t=2025-06-07T17:01:40.94945778Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:40.951325805Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.862985ms grafana | logger=migrator t=2025-06-07T17:01:40.954846421Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-07T17:01:40.955993691Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.14698ms grafana | logger=migrator t=2025-06-07T17:01:40.960495979Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-07T17:01:40.961896295Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.399516ms grafana | logger=migrator t=2025-06-07T17:01:40.966219951Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-07T17:01:40.968163381Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.942559ms grafana | logger=migrator t=2025-06-07T17:01:40.973600115Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-07T17:01:40.98180364Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.202745ms grafana | logger=migrator t=2025-06-07T17:01:41.020287526Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-07T17:01:41.031450313Z level=info msg="Migration successfully executed" id="permission kind migration" duration=11.163377ms grafana | logger=migrator t=2025-06-07T17:01:41.034277317Z 
level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-07T17:01:41.042644571Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.351724ms grafana | logger=migrator t=2025-06-07T17:01:41.04716835Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-07T17:01:41.053088264Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.919994ms grafana | logger=migrator t=2025-06-07T17:01:41.05724188Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-07T17:01:41.058378129Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.136479ms grafana | logger=migrator t=2025-06-07T17:01:41.061918687Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-07T17:01:41.063711667Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.79112ms grafana | logger=migrator t=2025-06-07T17:01:41.068563576Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-07T17:01:41.070681986Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=2.130811ms grafana | logger=migrator t=2025-06-07T17:01:41.076859977Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-07T17:01:41.085316917Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.456771ms grafana | logger=migrator t=2025-06-07T17:01:41.089332583Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-07T17:01:41.090599501Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.266528ms grafana | logger=migrator t=2025-06-07T17:01:41.095365654Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-07T17:01:41.096497024Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.13107ms grafana | logger=migrator t=2025-06-07T17:01:41.099721353Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-07T17:01:41.100645969Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=923.577µs grafana | logger=migrator t=2025-06-07T17:01:41.104090261Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-07T17:01:41.105250483Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.162652ms grafana | logger=migrator t=2025-06-07T17:01:41.109668824Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-07T17:01:41.109683965Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=15.831µs grafana | logger=migrator t=2025-06-07T17:01:41.114372023Z 
level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-07T17:01:41.115325792Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=953.709µs grafana | logger=migrator t=2025-06-07T17:01:41.118882261Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-07T17:01:41.118926864Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=45.013µs grafana | logger=migrator t=2025-06-07T17:01:41.124372659Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-07T17:01:41.12488838Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=513.201µs grafana | logger=migrator t=2025-06-07T17:01:41.134535803Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-07T17:01:41.135532165Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=997.332µs grafana | logger=migrator t=2025-06-07T17:01:41.139379492Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-07T17:01:41.140534452Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.149681ms grafana | logger=migrator t=2025-06-07T17:01:41.144408571Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-07T17:01:41.144735861Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=324.36µs grafana | logger=migrator t=2025-06-07T17:01:41.148213815Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-07T17:01:41.148739907Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=526.062µs grafana | logger=migrator t=2025-06-07T17:01:41.15398515Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-07T17:01:41.155268199Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.282289ms grafana | logger=migrator t=2025-06-07T17:01:41.160106466Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-07T17:01:41.162089988Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.982132ms grafana | logger=migrator t=2025-06-07T17:01:41.165675289Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-07T17:01:41.174500421Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.824333ms grafana | logger=migrator t=2025-06-07T17:01:41.181600758Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-07T17:01:41.18161986Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=19.622µs grafana | logger=migrator t=2025-06-07T17:01:41.184858439Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-07T17:01:41.186572475Z level=info msg="Migration successfully executed" id="create correlation 
table v1" duration=1.712845ms grafana | logger=migrator t=2025-06-07T17:01:41.19122659Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-07T17:01:41.192318418Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.091368ms grafana | logger=migrator t=2025-06-07T17:01:41.195833074Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-07T17:01:41.196957283Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.123648ms grafana | logger=migrator t=2025-06-07T17:01:41.20487572Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-07T17:01:41.213391574Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.515263ms grafana | logger=migrator t=2025-06-07T17:01:41.221931039Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.223001025Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.069346ms grafana | logger=migrator t=2025-06-07T17:01:41.227007981Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.22877724Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.768909ms grafana | logger=migrator t=2025-06-07T17:01:41.234802511Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-07T17:01:41.256141884Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.341093ms grafana | logger=migrator t=2025-06-07T17:01:41.26226061Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-07T17:01:41.263106072Z level=info msg="Migration successfully executed" id="create correlation v2" duration=844.921µs grafana | logger=migrator t=2025-06-07T17:01:41.270645926Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.272393693Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.747097ms grafana | logger=migrator t=2025-06-07T17:01:41.277418872Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.279107996Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.688464ms grafana | logger=migrator t=2025-06-07T17:01:41.282704597Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-07T17:01:41.283767022Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.061645ms grafana | logger=migrator t=2025-06-07T17:01:41.287893326Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:41.288250368Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=356.092µs grafana | logger=migrator t=2025-06-07T17:01:41.291963926Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator 
t=2025-06-07T17:01:41.292754156Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=789.349µs grafana | logger=migrator t=2025-06-07T17:01:41.297074531Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-07T17:01:41.305637527Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.562266ms grafana | logger=migrator t=2025-06-07T17:01:41.310693889Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-07T17:01:41.31917452Z level=info msg="Migration successfully executed" id="add type column" duration=8.479732ms grafana | logger=migrator t=2025-06-07T17:01:41.323484706Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-07T17:01:41.324309316Z level=info msg="Migration successfully executed" id="create entity_events table" duration=824.35µs grafana | logger=migrator t=2025-06-07T17:01:41.330051029Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-07T17:01:41.331636497Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.588048ms grafana | logger=migrator t=2025-06-07T17:01:41.337827848Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.338300297Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.342817594Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.343287574Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.346753877Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-07T17:01:41.347980102Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.226705ms grafana | logger=migrator t=2025-06-07T17:01:41.352415525Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-07T17:01:41.354394177Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.977652ms grafana | logger=migrator t=2025-06-07T17:01:41.359084635Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.360200584Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.115649ms grafana | logger=migrator t=2025-06-07T17:01:41.369986835Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-07T17:01:41.371936756Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.94859ms grafana | logger=migrator t=2025-06-07T17:01:41.378809038Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.38046002Z level=info msg="Migration 
successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.651402ms grafana | logger=migrator t=2025-06-07T17:01:41.383761633Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.384768734Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.006501ms grafana | logger=migrator t=2025-06-07T17:01:41.389292373Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-07T17:01:41.390562572Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.268708ms grafana | logger=migrator t=2025-06-07T17:01:41.396266302Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-07T17:01:41.398058402Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.79146ms grafana | logger=migrator t=2025-06-07T17:01:41.401077888Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.402475314Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.396836ms grafana | logger=migrator t=2025-06-07T17:01:41.408311283Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:41.410043969Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.731876ms grafana | logger=migrator t=2025-06-07T17:01:41.415165024Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-07T17:01:41.416426002Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.261848ms grafana | logger=migrator t=2025-06-07T17:01:41.419790079Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-07T17:01:41.441640613Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.850304ms grafana | logger=migrator t=2025-06-07T17:01:41.449459004Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-07T17:01:41.459824201Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.362507ms grafana | logger=migrator t=2025-06-07T17:01:41.465143658Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-07T17:01:41.472395945Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.251257ms grafana | logger=migrator t=2025-06-07T17:01:41.475521057Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-07T17:01:41.475746151Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=223.413µs grafana | logger=migrator t=2025-06-07T17:01:41.507781271Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-07T17:01:41.518259825Z level=info 
msg="Migration successfully executed" id="add share column" duration=10.478955ms grafana | logger=migrator t=2025-06-07T17:01:41.521557628Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-07T17:01:41.521731099Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=172.741µs grafana | logger=migrator t=2025-06-07T17:01:41.525017752Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-07T17:01:41.525962919Z level=info msg="Migration successfully executed" id="create file table" duration=944.667µs grafana | logger=migrator t=2025-06-07T17:01:41.52971317Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-07T17:01:41.530981618Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.266988ms grafana | logger=migrator t=2025-06-07T17:01:41.536509598Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-07T17:01:41.538279037Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.768448ms grafana | logger=migrator t=2025-06-07T17:01:41.541958314Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-07T17:01:41.542828557Z level=info msg="Migration successfully executed" id="create file_meta table" duration=869.474µs grafana | logger=migrator t=2025-06-07T17:01:41.550474317Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-07T17:01:41.552608168Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.133251ms grafana | logger=migrator t=2025-06-07T17:01:41.559751598Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-07T17:01:41.559770749Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=21.992µs grafana | logger=migrator t=2025-06-07T17:01:41.565548834Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-07T17:01:41.565567455Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=18.731µs grafana | logger=migrator t=2025-06-07T17:01:41.571389643Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-07T17:01:41.572339981Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=950.088µs grafana | logger=migrator t=2025-06-07T17:01:41.577075763Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-07T17:01:41.577617286Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=543.753µs grafana | logger=migrator t=2025-06-07T17:01:41.584112676Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-07T17:01:41.585137549Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.021453ms grafana | logger=migrator t=2025-06-07T17:01:41.59019676Z level=info msg="Executing migration" 
id="Add UID column to playlist" grafana | logger=migrator t=2025-06-07T17:01:41.599616589Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.418659ms grafana | logger=migrator t=2025-06-07T17:01:41.604748055Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-07T17:01:41.604898854Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=149.089µs grafana | logger=migrator t=2025-06-07T17:01:41.609720611Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-07T17:01:41.611754606Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.033724ms grafana | logger=migrator t=2025-06-07T17:01:41.61847691Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-07T17:01:41.618866624Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=389.933µs grafana | logger=migrator t=2025-06-07T17:01:41.622052219Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-07T17:01:41.622259032Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=206.843µs grafana | logger=migrator t=2025-06-07T17:01:41.626415768Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-07T17:01:41.627206767Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=790.578µs grafana | logger=migrator t=2025-06-07T17:01:41.634183446Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-07T17:01:41.646759239Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.579784ms grafana | logger=migrator t=2025-06-07T17:01:41.651884185Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-07T17:01:41.662891692Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.007748ms grafana | logger=migrator t=2025-06-07T17:01:41.667309013Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-07T17:01:41.668133944Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=824.421µs grafana | logger=migrator t=2025-06-07T17:01:41.674814335Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-07T17:01:41.755622135Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=80.80874ms grafana | logger=migrator t=2025-06-07T17:01:41.75862959Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-07T17:01:41.759488313Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=858.223µs grafana | logger=migrator t=2025-06-07T17:01:41.763305598Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | 
logger=migrator t=2025-06-07T17:01:41.764133829Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=827.801µs grafana | logger=migrator t=2025-06-07T17:01:41.769454146Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-07T17:01:41.797633079Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.157842ms grafana | logger=migrator t=2025-06-07T17:01:41.803928296Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-07T17:01:41.815177498Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=11.248262ms grafana | logger=migrator t=2025-06-07T17:01:41.819653084Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-07T17:01:41.819871737Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=217.953µs grafana | logger=migrator t=2025-06-07T17:01:41.823470138Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-07T17:01:41.823636878Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=166.45µs grafana | logger=migrator t=2025-06-07T17:01:41.825914719Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-07T17:01:41.826114061Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=198.822µs grafana | logger=migrator t=2025-06-07T17:01:41.82918362Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-07T17:01:41.829382252Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=198.513µs grafana | logger=migrator t=2025-06-07T17:01:41.834733811Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-07T17:01:41.834929223Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=195.372µs grafana | logger=migrator t=2025-06-07T17:01:41.837934138Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-07T17:01:41.838807912Z level=info msg="Migration successfully executed" id="create folder table" duration=873.304µs grafana | logger=migrator t=2025-06-07T17:01:41.841791396Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-07T17:01:41.843561714Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.769969ms grafana | logger=migrator t=2025-06-07T17:01:41.850535783Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-07T17:01:41.852484212Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.948439ms grafana | logger=migrator t=2025-06-07T17:01:41.855486467Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-07T17:01:41.85552667Z level=info msg="Migration successfully executed" id="Update 
folder title length" duration=41.213µs grafana | logger=migrator t=2025-06-07T17:01:41.859766471Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-07T17:01:41.860890049Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.123018ms grafana | logger=migrator t=2025-06-07T17:01:41.865969052Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-07T17:01:41.86722258Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.250887ms grafana | logger=migrator t=2025-06-07T17:01:41.87505133Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-07T17:01:41.877104267Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.052297ms grafana | logger=migrator t=2025-06-07T17:01:41.881364429Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-07T17:01:41.882038951Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=674.472µs grafana | logger=migrator t=2025-06-07T17:01:41.885462471Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-07T17:01:41.885884187Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=421.996µs grafana | logger=migrator t=2025-06-07T17:01:41.891609979Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-07T17:01:41.893370898Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.760308ms grafana | logger=migrator t=2025-06-07T17:01:41.896793588Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-07T17:01:41.897915178Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.121019ms grafana | logger=migrator t=2025-06-07T17:01:41.901840889Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-07T17:01:41.903836152Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.103229ms grafana | logger=migrator t=2025-06-07T17:01:41.907477055Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-07T17:01:41.908665438Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.188003ms grafana | logger=migrator t=2025-06-07T17:01:41.912949082Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-07T17:01:41.914108253Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.158651ms grafana | logger=migrator t=2025-06-07T17:01:41.919518406Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-07T17:01:41.921022429Z level=info 
msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.503532ms grafana | logger=migrator t=2025-06-07T17:01:41.925296011Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-07T17:01:41.926365527Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.068956ms grafana | logger=migrator t=2025-06-07T17:01:41.931428859Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-07T17:01:41.933632584Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.198894ms grafana | logger=migrator t=2025-06-07T17:01:41.940589992Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-07T17:01:41.941681919Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.091577ms grafana | logger=migrator t=2025-06-07T17:01:41.947168056Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-07T17:01:41.948482478Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.312622ms grafana | logger=migrator t=2025-06-07T17:01:41.953804645Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-07T17:01:41.955855291Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.050115ms grafana | logger=migrator t=2025-06-07T17:01:41.960119333Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-07T17:01:41.961315387Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.195813ms grafana | logger=migrator t=2025-06-07T17:01:41.964387155Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-07T17:01:41.964686664Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=299.499µs grafana | logger=migrator t=2025-06-07T17:01:41.999954863Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-07T17:01:42.01290685Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.951967ms grafana | logger=migrator t=2025-06-07T17:01:42.01795402Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-07T17:01:42.018453532Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=499.972µs grafana | logger=migrator t=2025-06-07T17:01:42.023041143Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-07T17:01:42.023071875Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=31.412µs grafana | logger=migrator t=2025-06-07T17:01:42.026503527Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-07T17:01:42.028648708Z level=info msg="Migration successfully executed" id="Delete unique index for 
dashboard_org_id_folder_id_title" duration=2.145211ms grafana | logger=migrator t=2025-06-07T17:01:42.034587523Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-07T17:01:42.034606154Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.521µs grafana | logger=migrator t=2025-06-07T17:01:42.039597222Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-07T17:01:42.041599335Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.000724ms grafana | logger=migrator t=2025-06-07T17:01:42.050192923Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-07T17:01:42.051365905Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.172422ms grafana | logger=migrator t=2025-06-07T17:01:42.056684192Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-07T17:01:42.058418409Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.733577ms grafana | logger=migrator t=2025-06-07T17:01:42.061979098Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-07T17:01:42.06362294Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.643391ms grafana | logger=migrator t=2025-06-07T17:01:42.06818353Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-07T17:01:42.069023132Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=840.963µs grafana | logger=migrator t=2025-06-07T17:01:42.073352738Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-07T17:01:42.073619454Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=267.126µs grafana | logger=migrator t=2025-06-07T17:01:42.077074937Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-07T17:01:42.077856045Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=778.898µs grafana | logger=migrator t=2025-06-07T17:01:42.081439965Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-07T17:01:42.082847122Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.406137ms grafana | logger=migrator t=2025-06-07T17:01:42.087826148Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-07T17:01:42.088811979Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=985.491µs grafana | logger=migrator t=2025-06-07T17:01:42.096211534Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-07T17:01:42.107801637Z level=info msg="Migration 
successfully executed" id="add stack_id column" duration=11.591024ms grafana | logger=migrator t=2025-06-07T17:01:42.111772401Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-07T17:01:42.119481735Z level=info msg="Migration successfully executed" id="add region_slug column" duration=7.708624ms grafana | logger=migrator t=2025-06-07T17:01:42.126673897Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-07T17:01:42.136022363Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.347616ms grafana | logger=migrator t=2025-06-07T17:01:42.139525328Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-07T17:01:42.149005231Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.478343ms grafana | logger=migrator t=2025-06-07T17:01:42.152265051Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-07T17:01:42.152437712Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=179.891µs grafana | logger=migrator t=2025-06-07T17:01:42.15794167Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-07T17:01:42.160196949Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.254169ms grafana | logger=migrator t=2025-06-07T17:01:42.165277602Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-07T17:01:42.174859321Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.579889ms grafana | logger=migrator t=2025-06-07T17:01:42.182578706Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-07T17:01:42.182702864Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=123.307µs grafana | logger=migrator t=2025-06-07T17:01:42.189372484Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-07T17:01:42.191224018Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.850623ms grafana | logger=migrator t=2025-06-07T17:01:42.196095067Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-07T17:01:42.225465924Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=29.371187ms grafana | logger=migrator t=2025-06-07T17:01:42.238937572Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-07T17:01:42.239637866Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=699.304µs grafana | logger=migrator t=2025-06-07T17:01:42.242657512Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:42.243506864Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=848.752µs grafana | logger=migrator t=2025-06-07T17:01:42.251496495Z level=info msg="Executing migration" id="copy 
cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:42.25174466Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=247.465µs grafana | logger=migrator t=2025-06-07T17:01:42.256486542Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-07T17:01:42.258037787Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.553125ms grafana | logger=migrator t=2025-06-07T17:01:42.265336977Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-07T17:01:42.289254957Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=23.918921ms grafana | logger=migrator t=2025-06-07T17:01:42.293265194Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-07T17:01:42.293944166Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=678.732µs grafana | logger=migrator t=2025-06-07T17:01:42.298125923Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-07T17:01:42.299239872Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.110559ms grafana | logger=migrator t=2025-06-07T17:01:42.30391999Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-07T17:01:42.304309484Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=390.144µs grafana | logger=migrator t=2025-06-07T17:01:42.31059025Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-07T17:01:42.311834366Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.243666ms grafana | logger=migrator t=2025-06-07T17:01:42.316570068Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-07T17:01:42.329097499Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.527541ms grafana | logger=migrator t=2025-06-07T17:01:42.332556251Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-07T17:01:42.342004462Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.447291ms grafana | logger=migrator t=2025-06-07T17:01:42.34603201Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-07T17:01:42.352936594Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=6.903464ms grafana | logger=migrator t=2025-06-07T17:01:42.359586503Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-07T17:01:42.371114003Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=11.5273ms grafana | logger=migrator t=2025-06-07T17:01:42.376007083Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator 
t=2025-06-07T17:01:42.383575859Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=7.567556ms grafana | logger=migrator t=2025-06-07T17:01:42.387897285Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-07T17:01:42.397427331Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.529066ms grafana | logger=migrator t=2025-06-07T17:01:42.400863292Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-07T17:01:42.401491511Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=627.708µs grafana | logger=migrator t=2025-06-07T17:01:42.404972885Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-07T17:01:42.438612705Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=33.639479ms grafana | logger=migrator t=2025-06-07T17:01:42.443850427Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-07T17:01:42.451476855Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.625268ms grafana | logger=migrator t=2025-06-07T17:01:42.476603511Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-07T17:01:42.490118812Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=13.516061ms grafana | logger=migrator t=2025-06-07T17:01:42.493200682Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-07T17:01:42.501242066Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=8.040624ms grafana | logger=migrator t=2025-06-07T17:01:42.506567724Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-07T17:01:42.516149083Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.580649ms grafana | logger=migrator t=2025-06-07T17:01:42.521619449Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-07T17:01:42.52163318Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=13.831µs grafana | logger=migrator t=2025-06-07T17:01:42.531801066Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-07T17:01:42.531828388Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=28.722µs grafana | logger=migrator t=2025-06-07T17:01:42.539558923Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:42.553518952Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.960839ms grafana | logger=migrator t=2025-06-07T17:01:42.557300974Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.565503749Z level=info 
msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.201775ms grafana | logger=migrator t=2025-06-07T17:01:42.570600693Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-07T17:01:42.570892341Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=291.157µs grafana | logger=migrator t=2025-06-07T17:01:42.583074619Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-07T17:01:42.583457643Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=384.884µs grafana | logger=migrator t=2025-06-07T17:01:42.586703463Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:42.595208917Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=8.503773ms grafana | logger=migrator t=2025-06-07T17:01:42.599205342Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.606436577Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.230135ms grafana | logger=migrator t=2025-06-07T17:01:42.610037628Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-07T17:01:42.620109657Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=10.070769ms grafana | logger=migrator t=2025-06-07T17:01:42.624033119Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-07T17:01:42.634038074Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=10.003765ms grafana | logger=migrator t=2025-06-07T17:01:42.642718089Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-07T17:01:42.643164996Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=446.547µs grafana | logger=migrator t=2025-06-07T17:01:42.647910127Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:42.659613448Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=11.701801ms grafana | logger=migrator t=2025-06-07T17:01:42.663180087Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.670312335Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=7.128938ms grafana | logger=migrator t=2025-06-07T17:01:42.674636622Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-07T17:01:42.674952481Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=314.81µs grafana | logger=migrator 
t=2025-06-07T17:01:42.680058685Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-07T17:01:42.680678633Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=613.277µs grafana | logger=migrator t=2025-06-07T17:01:42.685712063Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-07T17:01:42.68746371Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.750987ms grafana | logger=migrator t=2025-06-07T17:01:42.692731164Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-07T17:01:42.692752545Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=20.811µs grafana | logger=migrator t=2025-06-07T17:01:42.698104985Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-07T17:01:42.698130506Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=26.262µs grafana | logger=migrator t=2025-06-07T17:01:42.70191692Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-07T17:01:42.702505486Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=582.785µs grafana | logger=migrator t=2025-06-07T17:01:42.71589731Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.729465683Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=13.568113ms grafana | logger=migrator t=2025-06-07T17:01:42.73494176Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:42.744912104Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.969944ms grafana | logger=migrator t=2025-06-07T17:01:42.748148743Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-07T17:01:42.749168255Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.018822ms grafana | logger=migrator t=2025-06-07T17:01:42.753965081Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-07T17:01:42.755169465Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.203435ms grafana | logger=migrator t=2025-06-07T17:01:42.760220285Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-07T17:01:42.772212633Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=11.993068ms grafana | logger=migrator t=2025-06-07T17:01:42.776352878Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.784337909Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.984352ms grafana | logger=migrator t=2025-06-07T17:01:42.787580908Z level=info 
msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-07T17:01:42.787602219Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-07T17:01:42.787808602Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-07T17:01:42.787825843Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=243.895µs grafana | logger=migrator t=2025-06-07T17:01:42.79314611Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-07T17:01:42.793711355Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=564.425µs grafana | logger=migrator t=2025-06-07T17:01:42.799530013Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-07T17:01:42.801277191Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.746378ms grafana | logger=migrator t=2025-06-07T17:01:42.804986379Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-07T17:01:42.806410737Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.424568ms grafana | logger=migrator t=2025-06-07T17:01:42.812435487Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-07T17:01:42.813561946Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.125559ms grafana | logger=migrator t=2025-06-07T17:01:42.819135119Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-07T17:01:42.820261279Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.125739ms grafana | logger=migrator t=2025-06-07T17:01:42.824451376Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:42.835957073Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.506737ms grafana | logger=migrator t=2025-06-07T17:01:42.839293569Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-07T17:01:42.848590911Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.294952ms grafana | logger=migrator t=2025-06-07T17:01:42.856848419Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-07T17:01:42.868542828Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=11.695819ms grafana | logger=migrator t=2025-06-07T17:01:42.872009281Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator 
t=2025-06-07T17:01:42.879237996Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=7.227715ms grafana | logger=migrator t=2025-06-07T17:01:42.882892051Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-07T17:01:42.883082322Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-07T17:01:42.883099903Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=207.793µs grafana | logger=migrator t=2025-06-07T17:01:42.887475843Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-07T17:01:42.888658055Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.182082ms grafana | logger=migrator t=2025-06-07T17:01:42.892230136Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.147234915s grafana | logger=migrator t=2025-06-07T17:01:42.893148432Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-07T17:01:42.913438229Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-07T17:01:42.913639992Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-07T17:01:42.918236275Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-07T17:01:43.009882052Z level=info msg="Restored cache from database" duration=439.068µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.018437028Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-07T17:01:43.018452809Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-07T17:01:43.025868375Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-07T17:01:43.026579779Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=711.035µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.030793918Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-07T17:01:43.030808259Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=15.081µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.035413872Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-07T17:01:43.035493867Z level=info msg="Migration successfully executed" id="drop table resource" duration=79.595µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.037832331Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-07T17:01:43.038862274Z level=info msg="Migration successfully executed" id="create table resource" duration=1.029783ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.042897612Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-07T17:01:43.04530384Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=2.401018ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.051442167Z level=info msg="Executing migration" id="drop table resource_history" grafana | 
logger=resource-migrator t=2025-06-07T17:01:43.051523972Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=81.515µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.060215278Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-07T17:01:43.061999877Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.784469ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.066352345Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-07T17:01:43.068466555Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.112149ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.073288821Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-07T17:01:43.074446423Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.157002ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.077791989Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-07T17:01:43.077873354Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=81.385µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.080787683Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-07T17:01:43.081621994Z level=info msg="Migration successfully executed" id="create table resource_version" duration=833.971µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.085037274Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-07T17:01:43.08659198Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.553066ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.091256767Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-07T17:01:43.091490291Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=233.184µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.094255622Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-07T17:01:43.095424623Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.168392ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.099941751Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-07T17:01:43.101266712Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.323701ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.108183158Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-07T17:01:43.109415734Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.232435ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.113798144Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator 
t=2025-06-07T17:01:43.126793142Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=12.994359ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.131359323Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-07T17:01:43.138664713Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=7.30535ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.142069212Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-07T17:01:43.143229443Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.160971ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.147927853Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-07T17:01:43.149539701Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.609868ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.155147446Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-07T17:01:43.166145203Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.999166ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.171280989Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-07T17:01:43.179734479Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=8.451369ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.183709394Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-07T17:01:43.183750766Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-07T17:01:43.184501902Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=792.069µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.19861691Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-07T17:01:43.200653626Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.036076ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.208399021Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-07T17:01:43.220283283Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.883662ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.225265059Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-07T17:01:43.226195487Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=930.158µs grafana | logger=resource-migrator t=2025-06-07T17:01:43.230120068Z level=info msg="migrations completed" performed=26 skipped=0 duration=204.293346ms grafana | logger=resource-migrator t=2025-06-07T17:01:43.230581056Z level=info msg="Unlocking database" grafana | 
t=2025-06-07T17:01:43.230756337Z level=info caller=logger.go:214 time=2025-06-07T17:01:43.230734075Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-07T17:01:43.243309939Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-07T17:01:43.279936862Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-07T17:01:43.279958993Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-07T17:01:43.279983745Z level=info msg="Plugins loaded" count=53 duration=36.675106ms grafana | logger=query_data t=2025-06-07T17:01:43.284699845Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-07T17:01:43.289383093Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-07T17:01:43.316184102Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-07T17:01:43.324814652Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-07T17:01:43.324840654Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-07T17:01:43.327756413Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-07T17:01:43.328382522Z level=info msg="Warming state cache for startup" grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:43.331035585Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-07T17:01:43.331206325Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2025-06-07T17:01:43.331171263Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=grafanaStorageLogger t=2025-06-07T17:01:43.348072863Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2025-06-07T17:01:43.399368328Z level=info msg="State cache has been initialized" states=0 duration=70.988567ms grafana | logger=ngalert.scheduler t=2025-06-07T17:01:43.399680097Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-07T17:01:43.399909181Z level=info msg=starting first_tick=2025-06-07T17:01:50Z grafana | logger=plugins.update.checker t=2025-06-07T17:01:43.409224284Z level=info msg="Update check succeeded" duration=79.67238ms grafana | logger=grafana.update.checker t=2025-06-07T17:01:43.41972117Z level=info msg="Update check succeeded" duration=91.253053ms grafana | logger=provisioning.datasources t=2025-06-07T17:01:43.52571745Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2025-06-07T17:01:43.54393608Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-07T17:01:43.543974802Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-07T17:01:43.54556506Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-07T17:01:43.589071376Z level=info msg="Patterns update finished" duration=92.326539ms grafana | 
logger=plugin.installer t=2025-06-07T17:01:43.654060034Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-07T17:01:43.787994801Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-07T17:01:43.820392184Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:43.820423856Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=489.183529ms grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:43.820448568Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.968365186Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=plugin.installer t=2025-06-07T17:01:43.9690819Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.970951555Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.971812338Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.972612637Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.976797555Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.977567562Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.978166028Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.978730633Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-07T17:01:43.979590126Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=installer.fs t=2025-06-07T17:01:44.038277646Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=app-registry t=2025-06-07T17:01:44.043053119Z level=info msg="app registry initialized" grafana | logger=plugins.registration t=2025-06-07T17:01:44.054136881Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:44.054162562Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=233.705404ms grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:44.054187064Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-07T17:01:44.198888355Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-07T17:01:44.263164598Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration 
t=2025-06-07T17:01:44.279412638Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:44.279545346Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=225.347922ms grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:44.279645002Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-07T17:01:44.449314147Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-07T17:01:44.518248628Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=provisioning.dashboard t=2025-06-07T17:01:44.520007806Z level=info msg="finished to provision dashboards" grafana | logger=plugins.registration t=2025-06-07T17:01:44.548913684Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-07T17:01:44.548941105Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=269.208278ms grafana | logger=infra.usagestats t=2025-06-07T17:02:20.357376292Z level=info msg="Usage stats are ready to report"
===================================
======== Logs from kafka ========
kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2025-06-07 17:01:34,024] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:java.version=17.0.14 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client
environment:java.class.path=/usr/share/java/cp-base-new/kafka_2.13-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.9.1-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/jackson-core-2.16.0.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.9.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.16.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-databind-2.16.0.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.11.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.16.0.jar:/usr/share/java/cp-base-new/common-utils-7.9.1.jar:/usr/share/java/cp-base-new/kafka-server-common-7.9.1-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.11.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.5.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/utility-belt-7.9.1-52.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.16.0.jar:/usr/share/java/cp-base-new/kafka-server-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.9.1-ccs.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.16.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.16.0.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.6-4.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.9.1-ccs.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.9.1-ccs.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client 
environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.memory.free=500MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.memory.max=8044MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,025] INFO Client environment:os.memory.total=512MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,028] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@43a25848 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,031] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-07 17:01:34,035] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-07 17:01:34,040] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:34,051] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:34,052] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:34,059] INFO Socket connection established, initiating session, client: /172.17.0.6:38786, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:34,084] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000022fdf0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:34,201] INFO Session: 0x10000022fdf0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:34,201] INFO EventThread shut down for session: 0x10000022fdf0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
kafka | [2025-06-07 17:01:34,785] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-07 17:01:34,954] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-07 17:01:35,024] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-07 17:01:35,025] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:35,026] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:35,044] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-07 17:01:35,046] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,046] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,046] INFO Client environment:java.version=17.0.14 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,046] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,046] INFO Client environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/k
afka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../shar
e/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.memory.free=988MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,047] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,049] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@22f59fa (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-07 17:01:35,052] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-07 17:01:35,056] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:35,057] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-07 17:01:35,060] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:35,066] INFO Socket connection established, initiating session, client: /172.17.0.6:38788, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:35,075] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000022fdf0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-07 17:01:35,079] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-07 17:01:35,388] INFO Cluster ID = -llQQseyRW2G0bCR4Q7_Yw (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:35,461] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.gzip.level = -1 kafka | compression.lz4.level = 9 kafka | compression.type = producer kafka | compression.zstd.level = 3 kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.bootstrap.servers = [] kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | eligible.leader.replicas.enable = false kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.migration.policy = disabled kafka | 
group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.append.linger.ms = 10 kafka | group.coordinator.new.enable = false kafka | group.coordinator.rebalance.protocols = [classic] kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | group.share.delivery.count.limit = 5 kafka | group.share.enable = false kafka | group.share.heartbeat.interval.ms = 5000 kafka | group.share.max.groups = 10 kafka | group.share.max.heartbeat.interval.ms = 15000 kafka | group.share.max.record.lock.duration.ms = 60000 kafka | group.share.max.session.timeout.ms = 60000 kafka | group.share.max.size = 200 kafka | group.share.min.heartbeat.interval.ms = 5000 kafka | group.share.min.record.lock.duration.ms = 15000 kafka | group.share.min.session.timeout.ms = 45000 kafka | group.share.partition.max.record.locks = 200 kafka | group.share.record.lock.duration.ms = 30000 kafka | group.share.session.timeout.ms = 45000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.9-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dir.failure.timeout.ms = 30000 kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.initial.task.delay.ms = 30000 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | 
log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | max.request.partition.size.limit = 2000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.fetch.max.wait.ms = 500 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.copier.thread.pool.size = -1 kafka | remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 kafka | remote.log.manager.copy.quota.window.num = 11 kafka | remote.log.manager.copy.quota.window.size.seconds = 1 kafka | remote.log.manager.expiration.thread.pool.size = -1 kafka | remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 kafka | remote.log.manager.fetch.quota.window.num = 11 kafka | remote.log.manager.fetch.quota.window.size.seconds = 1 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | 
remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.allow.dn.changes = false kafka | ssl.allow.san.changes = false kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | 
ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | telemetry.max.bytes = 1048576 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unclean.leader.election.interval.ms = 300000 kafka | unstable.api.versions.enable = false kafka | unstable.feature.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.metadata.migration.min.batch.size = 200 kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-07 17:01:35,503] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-07 17:01:35,504] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-07 17:01:35,506] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-07 17:01:35,506] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-07 17:01:35,511] INFO [KafkaServer id=1] Rewriting /var/lib/kafka/data/meta.properties (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:35,554] INFO Loading logs from log dirs ArrayBuffer(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-07 17:01:35,557] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2025-06-07 17:01:35,566] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) kafka | [2025-06-07 17:01:35,567] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-07 17:01:35,568] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
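
The KafkaConfig dump above is the broker's effective configuration in ZooKeeper mode (zookeeper.connect = zookeeper:2181, process.roles = [], two PLAINTEXT listeners, broker.id = 1). The same values can be read back at runtime through the Kafka Admin API; a hedged sketch, assuming a client that can reach the broker via the advertised PLAINTEXT_HOST listener on localhost:29092 (the class name DumpBrokerConfig is hypothetical):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DumpBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // PLAINTEXT_HOST is the listener advertised for clients outside the compose network.
            props.put("bootstrap.servers", "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // broker.id = 1 in the dump above.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
                Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
                // Prints the same "key = value" pairs the broker logs at startup.
                config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }
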
kafka | [2025-06-07 17:01:35,578] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-07 17:01:35,626] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2025-06-07 17:01:35,636] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-07 17:01:35,645] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-07 17:01:35,688] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2025-06-07 17:01:35,957] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-07 17:01:35,971] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-07 17:01:35,971] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-07 17:01:35,975] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-07 17:01:35,978] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) kafka | [2025-06-07 17:01:35,995] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:35,996] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:35,997] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:35,999] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:35,999] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:36,010] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-07 17:01:36,010] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2025-06-07 17:01:36,032] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-07 17:01:36,053] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749315696043,1749315696043,1,0,0,72057603431006209,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-07 17:01:36,054] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-07 17:01:36,085] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-07 17:01:36,090] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:36,099] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:36,099] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:36,105] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-07 17:01:36,113] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:01:36,119] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,120] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:01:36,128] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,133] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-07 17:01:36,142] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-07 17:01:36,146] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-07 17:01:36,146] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-07 17:01:36,156] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,158] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(metadataVersion=3.9-IV0, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
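
At this point the broker has registered itself as the ephemeral znode /brokers/ids/1 (czxid 27) and has won the controller election via /controller_epoch. The registration payload is plain JSON and can be read back with any ZooKeeper client; a small sketch under the same assumptions as the probe above (the class name ReadBrokerRegistration is hypothetical):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ReadBrokerRegistration {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await(10, TimeUnit.SECONDS);
            // The payload is JSON carrying the endpoints logged above,
            // e.g. PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092.
            byte[] data = zk.getData("/brokers/ids/1", false, null);
            System.out.println(new String(data, java.nio.charset.StandardCharsets.UTF_8));
            zk.close();
        }
    }
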
kafka | [2025-06-07 17:01:36,159] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,161] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,164] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,184] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,189] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,193] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-07 17:01:36,193] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-07 17:01:36,199] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,200] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-07 17:01:36,200] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,200] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,201] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,203] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,203] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,204] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,204] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-07 17:01:36,205] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,208] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-07 17:01:36,212] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-07 17:01:36,213] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-07 17:01:36,219] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-07 17:01:36,220] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-07 17:01:36,220] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-07 17:01:36,221] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-07 17:01:36,221] INFO 
[PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-07 17:01:36,224] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-07 17:01:36,224] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,227] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) kafka | [2025-06-07 17:01:36,232] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,232] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,233] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,233] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,234] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,235] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient) kafka | [2025-06-07 17:01:36,237] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2025-06-07 17:01:36,239] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71) kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:299) kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252) kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136) kafka | [2025-06-07 17:01:36,241] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-07 17:01:36,241] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) kafka | [2025-06-07 17:01:36,243] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) kafka | [2025-06-07 17:01:36,244] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:36,249] INFO [KafkaServer id=1] Start processing authorizer futures (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:36,249] INFO [KafkaServer id=1] End processing authorizer futures (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:36,249] INFO [KafkaServer id=1] Start processing enable request processing future (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:36,250] INFO [KafkaServer id=1] End processing enable request processing future (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:36,252] INFO Kafka version: 7.9.1-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-07 17:01:36,252] INFO Kafka commitId: 9ee7460b50277c7131a7a2ea9587efdbd12ef30e (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-07 17:01:36,252] INFO Kafka startTimeMs: 1749315696250 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-07 17:01:36,254] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-07 17:01:36,344] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-07 17:01:36,398] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2025-06-07 17:01:36,401] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-07 17:01:36,483] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new ZK controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread) kafka | [2025-06-07 17:01:41,246] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-07 17:01:41,246] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-07 17:02:07,351] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-07 17:02:07,360] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-07 17:02:07,362] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-07 17:02:07,366] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
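
Both topics are auto-created here on first use (auto.create.topics.enable = true in the config dump above): policy-pdp-pap, the topic the policy framework uses for PAP-to-PDP messaging in this CSIT, and the internal __consumer_offsets topic with its fixed 50 partitions. An explicit creation of the application topic through the Admin API would look roughly like this hedged sketch (class name CreatePolicyPdpPapTopic is hypothetical):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePolicyPdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // Mirrors the assignment logged above: one partition, replication factor 1.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }
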
kafka | [2025-06-07 17:02:07,409] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(vjRFMQ9YQHCh0cMuSzW9xQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(21BHV5FYSYSmimAJjfv0Mg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-07 17:02:07,411] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,417] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,418] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,419] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,419] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-07 17:02:07,419] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,423] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 
17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,426] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica 
(state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-07 17:02:07,429] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-07 17:02:07,602] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-07 17:02:07,603] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-07 17:02:07,604] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-07 17:02:07,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-07 17:02:07,605] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-07 17:02:07,607] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 
17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,608] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,609] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from 
NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-07 17:02:07,612] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,616] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,617] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,617] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,617] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,618] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,619] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,620] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-07 17:02:07,659] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-07 17:02:07,660] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-07 17:02:07,661] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, 
__consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-07 17:02:07,662] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-07 17:02:07,704] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,715] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,717] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,718] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,720] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,768] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,769] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,769] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,770] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,770] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-07 17:02:07,781] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,782] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,782] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,782] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,782] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,793] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,794] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,794] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,794] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,795] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,804] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,804] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,805] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,805] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,805] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-07 17:02:07,817] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,818] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,818] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,818] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,819] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,827] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,831] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,832] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,832] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,832] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,848] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,850] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,850] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,851] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,852] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-07 17:02:07,870] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,871] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,871] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,871] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,872] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,882] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,883] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,883] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,883] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,883] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,896] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,896] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,897] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,897] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,897] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-07 17:02:07,903] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,903] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,903] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,903] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,903] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,912] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,913] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,913] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,913] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,913] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,921] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,921] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,921] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,921] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,922] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-07 17:02:07,929] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,930] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,930] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,930] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,930] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,937] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,937] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,937] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,938] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,938] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2025-06-07 17:02:07,947] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-07 17:02:07,948] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-07 17:02:07,948] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,948] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-07 17:02:07,948] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-07 17:02:07,957] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:07,958] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:07,958] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,958] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,958] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:07,969] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:07,971] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:07,971] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,971] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,971] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:07,979] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:07,980] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:07,980] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,980] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:07,980] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,018] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,019] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,019] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,019] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,019] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,027] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,028] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,028] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,028] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,028] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,034] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,035] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,035] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,035] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,035] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,042] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,043] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,043] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,043] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,043] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,049] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,050] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,050] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,050] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,050] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,058] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,059] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,059] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,059] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,059] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,068] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,069] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,069] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,069] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,069] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,081] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,082] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,082] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,082] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,082] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,093] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,094] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,094] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,094] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,094] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,101] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,101] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,101] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,101] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,101] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,108] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,109] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,109] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,109] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,109] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,115] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,116] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,116] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,116] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,116] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,127] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,128] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,128] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,128] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,128] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,135] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,135] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,135] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,135] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,135] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,143] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,144] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,144] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,144] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,144] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(vjRFMQ9YQHCh0cMuSzW9xQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,154] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,154] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,154] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,154] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,154] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,164] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,166] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,166] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,166] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,166] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,174] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,175] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,175] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,175] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,175] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,181] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,182] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,182] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,182] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,182] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,188] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,188] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,188] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,188] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,188] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,198] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,198] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,198] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,198] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,198] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,203] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,203] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,203] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,203] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,203] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,209] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,209] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,209] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,209] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,210] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,216] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,216] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,216] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,216] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,216] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,224] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,225] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,225] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,225] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,225] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,232] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,233] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,233] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,233] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,233] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,243] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,244] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,244] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,244] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,244] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,251] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,251] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,251] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,252] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,252] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,265] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,265] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,265] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,265] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,265] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,280] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,280] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,280] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,280] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,280] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-07 17:02:08,298] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-07 17:02:08,299] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-07 17:02:08,299] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,299] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-07 17:02:08,299] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(21BHV5FYSYSmimAJjfv0Mg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
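For orientation: the run above shows the broker creating the __consumer_offsets partitions with the offsets-topic overrides (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600), while policy-pdp-pap-0 is created with broker-default properties ({}). A minimal sketch to confirm that partition layout against a live broker; the kafka-python client and the bootstrap address are assumptions for illustration, not part of this CSIT job:

```python
# Sketch only: assumes kafka-python is installed and a broker is reachable
# at the illustrative address below.
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")  # illustrative address
offsets = consumer.partitions_for_topic("__consumer_offsets") or set()
pap = consumer.partitions_for_topic("policy-pdp-pap") or set()
# The log creates offsets partitions 0..49 plus policy-pdp-pap-0, i.e. the
# 51 partitions the LeaderAndIsr request below reports finishing.
print(len(offsets), sorted(pap))  # expected for this setup: 50 [0]
consumer.close()
```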
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-07 17:02:08,305] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-07 17:02:08,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
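The per-partition coordinator elections that follow are the group-coordinator side of the same become-leader transitions: each consumer group is served by the leader of one __consumer_offsets partition, selected as abs(hash(group.id)) % 50 with the default offsets.topic.num.partitions of 50. A minimal sketch of that mapping, reimplementing Java's String.hashCode; the group id shown is a hypothetical example, not one taken from this job:

```python
# Sketch of Kafka's group-id -> __consumer_offsets partition mapping; the
# broker elects itself coordinator for every such partition it leads.

def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode(): h = 31*h + c with signed 32-bit overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka applies Utils.abs (sign-bit mask) to the hash before the modulo.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(coordinator_partition("policy-pap-group"))  # hypothetical group id
```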
kafka | [2025-06-07 17:02:08,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,320] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,320] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,321] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-07 17:02:08,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,324] INFO [Broker id=1] Finished LeaderAndIsr request in 709ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-07 17:02:08,326] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,326] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,327] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,328] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,330] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=21BHV5FYSYSmimAJjfv0Mg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=vjRFMQ9YQHCh0cMuSzW9xQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-07 17:02:08,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,334] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,334] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,334] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
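The per-partition coordinator elections and offset loads above iterate over the 50 partitions of __consumer_offsets; each consumer group is pinned to exactly one of those partitions by hashing its group.id. A minimal Java sketch of that mapping, assuming the default offsets.topic.num.partitions=50 used in this deployment and mirroring Kafka's GroupMetadataManager.partitionFor (Utils.abs(hash) % n); for group policy-pap it yields 24, consistent with the "(__consumer_offsets-24)" rebalance entries further down:

    // Sketch: which __consumer_offsets partition hosts a given consumer group.
    // Assumes the default of 50 offsets-topic partitions seen in these logs.
    public final class OffsetsPartition {
        static int partitionFor(String groupId, int numPartitions) {
            // Kafka's Utils.abs masks the sign bit rather than calling Math.abs
            return (groupId.hashCode() & 0x7fffffff) % numPartitions;
        }
        public static void main(String[] args) {
            // "policy-pap" -> 24, matching "__consumer_offsets-24" below
            System.out.println(partitionFor("policy-pap", 50));
        }
    }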
kafka | [2025-06-07 17:02:08,334] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,335] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,335] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,335] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,338] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,338] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-07 17:02:08,338] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for
partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,339] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 
17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 
2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,340] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,341] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-07 17:02:08,345] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-07 17:02:08,434] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d in Empty state. Created a new member id consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:08,437] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:08,449] INFO [GroupCoordinator 1]: Preparing to rebalance group 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d in state PreparingRebalance with old generation 0 (__consumer_offsets-8) (reason: Adding new member consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:08,454] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:09,053] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bf1b87b9-7a12-4cf3-b13c-c07f285c20fb in Empty state. Created a new member id consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
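The "Dynamic member with unknown member id ... MemberIdRequiredException" entries above are the broker side of a perfectly normal first join: a plain KafkaConsumer sends JoinGroup without a member id, the coordinator assigns one and asks it to rejoin, and the group then stabilizes at generation 1. A minimal client-side sketch of that exchange (not part of the CSIT itself); the broker address kafka:9092, the group policy-pap, the topic policy-pdp-pap, and the StringDeserializer all appear in these logs, everything else is illustrative:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public final class JoinGroupDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // The first poll drives the JoinGroup -> member-id-assigned ->
                // rejoin -> stabilized sequence logged by the coordinator above.
                consumer.poll(Duration.ofSeconds(5));
            }
        }
    }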
kafka | [2025-06-07 17:02:09,058] INFO [GroupCoordinator 1]: Preparing to rebalance group bf1b87b9-7a12-4cf3-b13c-c07f285c20fb in state PreparingRebalance with old generation 0 (__consumer_offsets-33) (reason: Adding new member consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:11,467] INFO [GroupCoordinator 1]: Stabilized group 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d generation 1 (__consumer_offsets-8) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:11,474] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:11,490] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:11,502] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7 for group 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:12,060] INFO [GroupCoordinator 1]: Stabilized group bf1b87b9-7a12-4cf3-b13c-c07f285c20fb generation 1 (__consumer_offsets-33) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-07 17:02:12,077] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387 for group bf1b87b9-7a12-4cf3-b13c-c07f285c20fb for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) =================================== ======== Logs from mariadb ======== mariadb | 2025-06-07 17:01:30+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2025-06-07 17:01:31+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2025-06-07 17:01:31+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2025-06-07 17:01:31+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2025-06-07 17:01:31 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2025-06-07 17:01:31 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2025-06-07 17:01:31 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers.
mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2025-06-07 17:01:33+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2025-06-07 17:01:33+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2025-06-07 17:01:33+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2025-06-07 17:01:33 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ... mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2025-06-07 17:01:33 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2025-06-07 17:01:33 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2025-06-07 17:01:33 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2025-06-07 17:01:33 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2025-06-07 17:01:33 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2025-06-07 17:01:33 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2025-06-07 17:01:33 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2025-06-07 17:01:33 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2025-06-07 17:01:33 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2025-06-07 17:01:34+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2025-06-07 17:01:35+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2025-06-07 17:01:35+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2025-06-07 17:01:35+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2025-06-07 17:01:35+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2025-06-07 17:01:36+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2025-06-07 17:01:36 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Starting shutdown... 
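The db.sh trace above loops over six schemas (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp), creating each and granting policy_user full privileges on it. One way to confirm the grants took effect is to connect as that user and list the visible schemas; a sketch using plain JDBC, assuming MariaDB Connector/J is on the classpath and the container is reachable on localhost:3306 (inside the compose network the host would be mariadb). The policy_user/policy_user credentials come from the trace above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public final class CheckPolicyDbs {
        public static void main(String[] args) throws Exception {
            // Host and port are assumptions; credentials match the db.sh trace.
            String url = "jdbc:mariadb://localhost:3306/policyadmin";
            try (Connection c = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SHOW DATABASES")) {
                while (rs.next()) {
                    // Expect the six schemas created by the loop in db.sh
                    System.out.println(rs.getString(1));
                }
            }
        }
    }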
mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Buffer pool(s) dump completed at 250607 17:01:36 mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Shutdown completed; log sequence number 332882; transaction id 298 mariadb | 2025-06-07 17:01:36 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2025-06-07 17:01:36+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2025-06-07 17:01:36+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2025-06-07 17:01:36 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2025-06-07 17:01:36 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2025-06-07 17:01:36 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2025-06-07 17:01:36 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2025-06-07 17:01:36 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: log sequence number 332882; transaction id 299 mariadb | 2025-06-07 17:01:37 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2025-06-07 17:01:37 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2025-06-07 17:01:37 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2025-06-07 17:01:37 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2025-06-07 17:01:37 0 [Note] Server socket created on IP: '::'. mariadb | 2025-06-07 17:01:37 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2025-06-07 17:01:37 0 [Note] InnoDB: Buffer pool(s) load completed at 250607 17:01:37 mariadb | 2025-06-07 17:01:37 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) mariadb | 2025-06-07 17:01:37 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) mariadb | 2025-06-07 17:01:37 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2025-06-07 17:01:37 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) =================================== ======== Logs from apex-pdp ======== policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.5:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.6:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.9:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2025-06-07T17:02:08.190+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2025-06-07T17:02:08.374+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = bf1b87b9-7a12-4cf3-b13c-c07f285c20fb policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = 
read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm 
= PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-07T17:02:08.575+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2025-06-07T17:02:08.575+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2025-06-07T17:02:08.575+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315728573 policy-apex-pdp | [2025-06-07T17:02:08.578+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-1, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-07T17:02:08.591+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-07T17:02:08.592+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2025-06-07T17:02:08.593+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2025-06-07T17:02:08.611+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = bf1b87b9-7a12-4cf3-b13c-c07f285c20fb policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2025-06-07T17:02:08.619+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2025-06-07T17:02:08.620+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | 
[2025-06-07T17:02:08.620+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315728619 policy-apex-pdp | [2025-06-07T17:02:08.620+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2025-06-07T17:02:08.621+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=753ebcab-9ca7-4377-a888-4a358b40b6e4, alive=false, publisher=null]]: starting policy-apex-pdp | [2025-06-07T17:02:08.637+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 
10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2025-06-07T17:02:08.658+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
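[Editor's note] The ProducerConfig dump above describes an ordinary idempotent Kafka producer on a PLAINTEXT listener (acks = -1, enable.idempotence = true, string serializers). A minimal Java sketch of an equivalent standalone producer, assuming the kafka-clients 3.6.x API and the kafka:9092 bootstrap address from the log; the class name and payload are illustrative, not the actual ONAP wrapper code:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpStatusProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ProducerConfig dump in the log above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // log: "Instantiated an idempotent producer"
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // acks = -1 in the dump means "all"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Illustrative payload; the real services publish full PDP_STATUS JSON messages.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }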
policy-apex-pdp | [2025-06-07T17:02:08.675+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2025-06-07T17:02:08.675+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2025-06-07T17:02:08.675+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315728675 policy-apex-pdp | [2025-06-07T17:02:08.677+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=753ebcab-9ca7-4377-a888-4a358b40b6e4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2025-06-07T17:02:08.677+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2025-06-07T17:02:08.677+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2025-06-07T17:02:08.680+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2025-06-07T17:02:08.680+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2025-06-07T17:02:08.682+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2025-06-07T17:02:08.682+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2025-06-07T17:02:08.682+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2025-06-07T17:02:08.682+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-apex-pdp | [2025-06-07T17:02:08.682+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2025-06-07T17:02:08.683+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2025-06-07T17:02:08.706+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2025-06-07T17:02:08.712+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"b2a6207a-91c7-440b-9e89-0bac090a466e","timestampMs":1749315728688,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-07T17:02:08.875+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2025-06-07T17:02:08.875+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2025-06-07T17:02:08.875+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2025-06-07T17:02:08.875+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2025-06-07T17:02:08.888+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-07T17:02:08.888+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2025-06-07T17:02:08.889+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
policy-apex-pdp | [2025-06-07T17:02:08.893+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2025-06-07T17:02:09.018+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-apex-pdp | [2025-06-07T17:02:09.019+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-apex-pdp | [2025-06-07T17:02:09.019+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2025-06-07T17:02:09.020+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2025-06-07T17:02:09.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] (Re-)joining group policy-apex-pdp | [2025-06-07T17:02:09.054+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Request joining group due to: need to re-join with the given member-id: consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387 policy-apex-pdp | [2025-06-07T17:02:09.056+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2025-06-07T17:02:09.056+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] (Re-)joining group policy-apex-pdp | [2025-06-07T17:02:09.593+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2025-06-07T17:02:09.593+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2025-06-07T17:02:12.064+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387', protocol='range'} policy-apex-pdp | [2025-06-07T17:02:12.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Finished assignment for group at generation 1: {consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2025-06-07T17:02:12.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2-48b53905-d210-4576-88ac-d0cc8402b387', protocol='range'} policy-apex-pdp | [2025-06-07T17:02:12.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2025-06-07T17:02:12.082+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2025-06-07T17:02:12.089+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2025-06-07T17:02:12.103+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bf1b87b9-7a12-4cf3-b13c-c07f285c20fb-2, groupId=bf1b87b9-7a12-4cf3-b13c-c07f285c20fb] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
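[Editor's note] The join/rejoin sequence above (MemberIdRequiredException followed by a second join with the assigned member id, then assignment of policy-pdp-pap-0 and an offset reset) is the normal first-join handshake of the Kafka consumer group protocol; the reset to the log-end position follows from auto.offset.reset = latest when no committed offset exists. A minimal Java sketch of the consumer side, assuming kafka-clients 3.6.x; the group id below is a placeholder, whereas the real services use a random UUID group per PDP instance, as visible in the log:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // real logs: a UUID group id per instance
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // why the log resets to the latest position
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // group join and rebalance happen inside poll()
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(15))) {
                    System.out.println(rec.value());
                }
            }
        }
    }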
policy-apex-pdp | [2025-06-07T17:02:28.683+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8a45345a-4009-49e5-8e20-99a19a977544","timestampMs":1749315748683,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-07T17:02:28.718+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8a45345a-4009-49e5-8e20-99a19a977544","timestampMs":1749315748683,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-07T17:02:28.720+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-07T17:02:28.885+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","timestampMs":1749315748814,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.894+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2025-06-07T17:02:28.894+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"822ad1cb-eeff-41d7-8658-6f9641c560de","timestampMs":1749315748894,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-07T17:02:28.898+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f8c887a1-6ee1-4ca9-b0cb-bb1efefa475e","timestampMs":1749315748898,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.905+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"822ad1cb-eeff-41d7-8658-6f9641c560de","timestampMs":1749315748894,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"} policy-apex-pdp | [2025-06-07T17:02:28.906+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-07T17:02:28.909+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"f8c887a1-6ee1-4ca9-b0cb-bb1efefa475e","timestampMs":1749315748898,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.910+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-07T17:02:28.949+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","timestampMs":1749315748815,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.952+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfe2a329-416f-4fb5-975f-610900939321","timestampMs":1749315748952,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.962+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"bfe2a329-416f-4fb5-975f-610900939321","timestampMs":1749315748952,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:28.962+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-07T17:02:29.003+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"deda2dff-5cbc-410d-993d-6bf01b477fab","timestampMs":1749315748972,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:29.006+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"deda2dff-5cbc-410d-993d-6bf01b477fab","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a1b5d975-01f1-4040-9c6b-6b351c23a66e","timestampMs":1749315749006,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:29.018+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"deda2dff-5cbc-410d-993d-6bf01b477fab","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a1b5d975-01f1-4040-9c6b-6b351c23a66e","timestampMs":1749315749006,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2025-06-07T17:02:29.018+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2025-06-07T17:02:56.156+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.3 - policyadmin [07/Jun/2025:17:02:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/3.4.1" policy-apex-pdp | [2025-06-07T17:03:56.084+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.3 - policyadmin [07/Jun/2025:17:03:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/3.4.1" =================================== ======== Logs from api ======== policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.5:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2025-06-07T17:01:45.766+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2025-06-07T17:01:45.825+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-07T17:01:45.826+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2025-06-07T17:01:47.658+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-07T17:01:47.743+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. policy-api | [2025-06-07T17:01:48.150+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2025-06-07T17:01:48.151+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2025-06-07T17:01:48.802+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2025-06-07T17:01:48.812+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-07T17:01:48.814+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-07T17:01:48.815+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2025-06-07T17:01:48.907+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-07T17:01:48.907+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3011 ms policy-api | [2025-06-07T17:01:49.333+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-07T17:01:49.393+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2025-06-07T17:01:49.434+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-07T17:01:49.701+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-07T17:01:49.729+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-07T17:01:49.816+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@312b34e3 policy-api | [2025-06-07T17:01:49.818+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
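[Editor's note] The policy-api startup above shows a standard HikariCP pool (HikariPool-1) opening a connection through the MariaDB JDBC driver (org.mariadb.jdbc.Connection). A minimal sketch of the equivalent programmatic configuration, assuming HikariCP and the MariaDB connector are on the classpath; only host and port come from the log, while the database name and credentials here are illustrative placeholders:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class ApiDataSourceSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            // mariadb:3306 from the log; schema and credentials are assumptions for illustration.
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");
            config.setUsername("policy_user");
            config.setPassword("policy_password");
            try (HikariDataSource ds = new HikariDataSource(config); // corresponds to "HikariPool-1 - Starting..."
                 Connection conn = ds.getConnection()) {             // corresponds to "Added connection org.mariadb.jdbc.Connection@..."
                System.out.println(conn.isValid(2));
            }
        }
    }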
policy-api | [2025-06-07T17:01:51.860+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-07T17:01:51.863+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-07T17:01:52.788+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-07T17:01:53.554+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-07T17:01:54.681+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-07T17:01:54.918+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@347b27f3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@7f930614, org.springframework.security.web.context.SecurityContextHolderFilter@4812c244, org.springframework.security.web.header.HeaderWriterFilter@6f54a7be, org.springframework.security.web.authentication.logout.LogoutFilter@5ae50044, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5aa2168f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@89537c1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@8b3ea30, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ef0a044, org.springframework.security.web.access.ExceptionTranslationFilter@2f3181d9, org.springframework.security.web.access.intercept.AuthorizationFilter@7d6d93f9] policy-api | [2025-06-07T17:01:55.794+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2025-06-07T17:01:55.891+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-07T17:01:55.919+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-07T17:01:55.938+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.835 seconds (process running for 11.495) policy-api | [2025-06-07T17:02:37.866+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-07T17:02:37.866+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-07T17:02:37.873+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 7 ms policy-api | [2025-06-07T17:02:38.193+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers: policy-api | [] =================================== ======== Logs from csit-tests ======== policy-csit | Invoking the robot tests from: pap-test.robot pap-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v 
POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v CLAMP_K8S_TEST: policy-csit | Starting Robot test suites ... policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Test policy-csit | ============================================================================== policy-csit | LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | LoadNodeTemplates :: Create node templates in database using speci... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Healthcheck :: Verify policy pap health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Consolidated Healthcheck :: Verify policy consolidated health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Metrics :: Verify policy pap is exporting prometheus metrics | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeployPdpGroups :: Deploy policies in PdpGroups | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Test | PASS | policy-csit | 22 tests, 22 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas.Pap-Slas policy-csit | ============================================================================== policy-csit | WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Pap-Test & Pap-Slas.Pap-Slas | PASS | policy-csit | 8 tests, 8 passed, 0 failed policy-csit | ============================================================================== policy-csit | Pap-Test & Pap-Slas | PASS | policy-csit | 30 tests, 30 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 =================================== ======== Logs from policy-db-migrator ======== policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE 
TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator 
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
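The pdp table keys each engine instance by a four-part reference key rather than by name/version alone. A sketch of walking that hierarchy, assuming the usual layout in which a PDP's parentLocalName is the localName of its owning subgroup; the column semantics are inferred from the key shapes above, not stated in this log, and the subgroup name is hypothetical:

SELECT p.localName AS pdp_instance, p.HEALTHY, p.PDPSTATE
FROM pdp p
JOIN pdpsubgroup s
  ON  p.parentKeyName    = s.parentKeyName      -- PDP group key, inferred
  AND p.parentKeyVersion = s.parentKeyVersion
  AND p.parentLocalName  = s.localName          -- owning subgroup, inferred
WHERE s.localName = 'xacml';                    -- hypothetical subgroup name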
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
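Each concept family in this schema arrives as a trio: a concept table (toscanodetype), a container table (toscanodetypes), and a container-to-concept map table (toscanodetypes_toscanodetype) keyed by the container's composite key; the misspelled concpetContainerMapVersion column is reproduced verbatim from the schema itself. A sketch of listing the members of one container, with hypothetical sample values:

SELECT m.name, m.version
FROM toscanodetypes_toscanodetype m
JOIN toscanodetypes c
  ON  m.conceptContainerName    = c.name
  AND m.conceptContainerVersion = c.version
WHERE c.name = 'ToscaNodeTypesExample'   -- hypothetical container name
  AND c.version = '1.0.0';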
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
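Several columns in this phase (ENTRYSCHEMA here, PROPERTIES and INPUTS earlier) are LONGBLOB fields holding serialized content rather than relational data. A minimal inspection sketch, valid only for this 0800-era schema since toscaproperty is dropped again in the 0900→1000 upgrade further down; the parent key value is hypothetical:

SELECT localName,
       CONVERT(ENTRYSCHEMA USING utf8) AS entry_schema_text  -- render the serialized blob as text
FROM toscaproperty
WHERE parentKeyName = 'onap.policies.Example';               -- hypothetical parent concept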
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
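The 0830-0950 scripts create plain indexes carrying the same names as the FOREIGN KEY constraints added in 0960-1060, so the storage engine can back each constraint with the pre-built index instead of generating its own. The pattern, reduced to a hedged two-step sketch on hypothetical child/parent tables (names are illustrative only):

CREATE INDEX FK_Child_parentName ON child(parentName, parentVersion);
ALTER TABLE child ADD CONSTRAINT FK_Child_parentName
    FOREIGN KEY (parentName, parentVersion)
    REFERENCES parent (name, version)
    ON UPDATE RESTRICT ON DELETE RESTRICT;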
policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, 
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
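0220-sequence.sql seeds the JPA table generator with the highest statistics ID already in use, so identifiers handed out after the upgrade cannot collide with the migrated rows; IFNULL(max(id),0) guards the empty-table case. A one-line check of the seeded row (SEQ_GEN is the generator name used by the insert above):

SELECT SEQ_NAME, SEQ_COUNT FROM sequence WHERE SEQ_NAME = 'SEQ_GEN';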
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-db-migrator | msg
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | TRUNCATE TABLE sequence
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
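The 1100→1200 rename of USER to USERNAME sidesteps a keyword problem: USER collides with the built-in USER() function and is a reserved word in some SQL dialects (PostgreSQL, for example, rejects it unquoted), so the unreserved USERNAME is safer to query. A before/after sketch with illustrative columns only; the quoting in the first line is shown for safety:

SELECT `USER`, ACTION FROM jpapolicyaudit;    -- pre-1200 schema
SELECT USERNAME, ACTION FROM jpapolicyaudit;  -- post-1200 schema, no quoting needed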
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE statistics_sequence
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:38
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:39
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:40
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:41
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0706251701380800u 1 2025-06-07 17:01:42
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:43
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0706251701381000u 1 2025-06-07 17:01:43
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0706251701381100u 1 2025-06-07 17:01:43
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43
policy-db-migrator | policyadmin: OK @ 1300
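The report above is printed from the migrator's own change-log table, which records one row per script with its tag and a success flag. A hedged query against it; the table name policyadmin_schema_changelog is an assumption and is not confirmed anywhere in this log:

-- assumption: the migrator's change-log table is named policyadmin_schema_changelog
SELECT ID, script, from_version, to_version, atTime
FROM policyadmin_schema_changelog
WHERE success = 0   -- any script that failed
ORDER BY ID;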
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0706251701381200u 1 2025-06-07 17:01:43 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0706251701381300u 1 2025-06-07 17:01:43 policy-db-migrator | policyadmin: OK @ 1300 =================================== ======== Logs from pap ======== policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.5:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | Waiting for api port 6969... policy-pap | kafka (172.17.0.6:9092) open policy-pap | api (172.17.0.8:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2025-06-07T17:01:58.202+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2025-06-07T17:01:58.259+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-07T17:01:58.260+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2025-06-07T17:02:00.169+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-07T17:02:00.258+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 7 JPA repository interfaces. policy-pap | [2025-06-07T17:02:00.704+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2025-06-07T17:02:00.705+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
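Every policy-db-migrator ledger row above has the same fixed shape: a running id, the .sql script, the operation, the source and target schema versions, a run tag, what looks like a success flag (always 1 here), and a timestamp, with the run closing at "policyadmin: OK @ 1300". The sketch below is a small hypothetical parser for one such row; the field meanings are inferred purely from the log shape, not from migrator documentation, and the class and constant names are invented for illustration.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /** Hypothetical parser for one policy-db-migrator ledger row as printed in this log. */
    public class MigratorRowParser {
        // Row copied verbatim from the log above.
        static final String ROW =
            "policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0706251701380900u 1 2025-06-07 17:01:42";

        // Inferred field order: id, script, operation, fromVersion, toVersion, tag, success, date + time.
        static final Pattern ROW_PATTERN = Pattern.compile(
            "policy-db-migrator \\| (\\d+) (\\S+) (\\S+) (\\S+) (\\S+) (\\S+) (\\d) (\\S+ \\S+)");

        public static void main(String[] args) {
            Matcher m = ROW_PATTERN.matcher(ROW);
            if (m.matches()) {
                System.out.printf("id=%s script=%s %s %s -> %s success=%s at %s%n",
                    m.group(1), m.group(2), m.group(3), m.group(4), m.group(5), m.group(7), m.group(8));
            }
        }
    }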
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2025-06-07T17:02:01.343+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2025-06-07T17:02:01.353+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-07T17:02:01.355+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-07T17:02:01.355+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2025-06-07T17:02:01.450+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-07T17:02:01.450+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3120 ms policy-pap | [2025-06-07T17:02:01.846+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-07T17:02:01.899+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final policy-pap | [2025-06-07T17:02:02.238+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-07T17:02:02.330+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@14982a82 policy-pap | [2025-06-07T17:02:02.332+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-07T17:02:02.365+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2025-06-07T17:02:03.813+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2025-06-07T17:02:03.824+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-07T17:02:04.330+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2025-06-07T17:02:04.752+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2025-06-07T17:02:04.879+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2025-06-07T17:02:05.147+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null 
policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-07T17:02:05.316+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:05.316+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:05.317+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315725315 policy-pap | [2025-06-07T17:02:05.319+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-1, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-07T17:02:05.320+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | 
request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-07T17:02:05.325+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:05.326+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:05.326+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315725325 policy-pap | [2025-06-07T17:02:05.326+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-07T17:02:05.624+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, 
pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-07T17:02:05.781+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-07T17:02:06.012+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@400e741, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3be369fc, org.springframework.security.web.context.SecurityContextHolderFilter@40db6136, org.springframework.security.web.header.HeaderWriterFilter@5d98364c, org.springframework.security.web.authentication.logout.LogoutFilter@1bf10539, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@7577589, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6ee1ddcf, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@70aa03c0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@35744f8, org.springframework.security.web.access.ExceptionTranslationFilter@4fd63c43, org.springframework.security.web.access.intercept.AuthorizationFilter@6a3a56de] policy-pap | [2025-06-07T17:02:06.742+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2025-06-07T17:02:06.837+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-07T17:02:06.855+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-07T17:02:06.874+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-07T17:02:06.874+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-07T17:02:06.875+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-07T17:02:06.876+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-07T17:02:06.876+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-07T17:02:06.876+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-07T17:02:06.876+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-07T17:02:06.878+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cc81ea1 policy-pap | 
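The ConsumerConfig dumps above show PAP creating plain Kafka string consumers against kafka:9092 over PLAINTEXT: one in group 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d for policy-pdp-pap, and one in group policy-pap whose policy-heartbeat source maps onto the same effectiveTopic. PAP itself wires these through policy-common's SingleThreadedKafkaTopicSource; the sketch below is only a minimal standalone approximation of the non-default logged settings, assuming kafka-clients on the classpath, with the 15-second poll mirroring fetchTimeout=15000 in the topic source.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    /** Standalone sketch of a consumer matching the dumped config; not PAP's actual wiring. */
    public class PdpPapSourceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig dump above; everything else stays at defaults.
            props.put("bootstrap.servers", "kafka:9092");
            props.put("group.id", "policy-pap");
            props.put("auto.offset.reset", "latest");
            props.put("security.protocol", "PLAINTEXT");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    // 15s poll window, mirroring fetchTimeout=15000 in the topic source above.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.println("[IN|KAFKA|policy-pdp-pap] " + r.value());
                    }
                }
            }
        }
    }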
[2025-06-07T17:02:06.890+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-07T17:02:06.890+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 5547e13a-a9ea-4e08-84c4-a39fc30f8f6d policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 
policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-07T17:02:06.897+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:06.897+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:06.897+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315726897 policy-pap | [2025-06-07T17:02:06.898+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-07T17:02:06.898+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-07T17:02:06.898+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8ee583f7-944b-4794-b569-56da12c6d6c8, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@72ce8a9b policy-pap | [2025-06-07T17:02:06.898+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8ee583f7-944b-4794-b569-56da12c6d6c8, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-07T17:02:06.899+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | 
sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-07T17:02:06.903+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:06.903+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:06.903+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315726903 policy-pap | [2025-06-07T17:02:06.903+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-07T17:02:06.904+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-07T17:02:06.904+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8ee583f7-944b-4794-b569-56da12c6d6c8, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-07T17:02:06.904+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-07T17:02:06.904+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5aadcd75-c7a3-4ddb-9a17-77736243b341, alive=false, 
publisher=null]]: starting policy-pap | [2025-06-07T17:02:06.919+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | 
ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-07T17:02:06.928+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-07T17:02:06.944+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:06.944+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:06.944+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315726944 policy-pap | [2025-06-07T17:02:06.944+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5aadcd75-c7a3-4ddb-9a17-77736243b341, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-07T17:02:06.944+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d022dd15-5f5c-45dd-ada2-5cdbdbd2f75f, alive=false, publisher=null]]: starting policy-pap | [2025-06-07T17:02:06.945+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 
policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-07T17:02:06.945+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
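Both ProducerConfig dumps above describe the same sink setup: idempotent string producers (enable.idempotence = true with acks = -1, i.e. "all") publishing over PLAINTEXT to kafka:9092, with transactional.id left null, so they are idempotent but non-transactional; retries shows 2147483647 (Integer.MAX_VALUE), the client default. A minimal standalone equivalent, again assuming only kafka-clients on the classpath, could look like the sketch below; the payload is illustrative, not a real PDP message.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    /** Standalone sketch of a producer matching the dumped config; not PAP's actual wiring. */
    public class PdpPapSinkSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("acks", "all");                // logged as -1; required for idempotence
            props.put("enable.idempotence", "true"); // matches the dump; retries stays at the MAX_VALUE default
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Illustrative payload only; real PDP_UPDATE bodies appear later in this log.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }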
policy-pap | [2025-06-07T17:02:06.948+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2025-06-07T17:02:06.948+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2025-06-07T17:02:06.948+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749315726948 policy-pap | [2025-06-07T17:02:06.948+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d022dd15-5f5c-45dd-ada2-5cdbdbd2f75f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-07T17:02:06.949+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-07T17:02:06.949+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-07T17:02:06.950+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-07T17:02:06.951+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-07T17:02:06.953+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-07T17:02:06.957+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-07T17:02:06.958+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-07T17:02:06.958+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-07T17:02:06.958+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-07T17:02:06.958+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-07T17:02:06.959+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-07T17:02:06.960+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.477 seconds (process running for 10.142) policy-pap | [2025-06-07T17:02:07.359+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-pap | [2025-06-07T17:02:07.360+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-pap | [2025-06-07T17:02:07.361+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-07T17:02:07.370+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-pap | [2025-06-07T17:02:07.396+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.397+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: -llQQseyRW2G0bCR4Q7_Yw policy-pap | [2025-06-07T17:02:07.472+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-07T17:02:07.473+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] 
ProducerId set to 1 with epoch 0 policy-pap | [2025-06-07T17:02:07.492+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.514+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.614+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.635+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.728+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.744+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.837+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.850+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.947+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:07.964+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.054+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.067+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.161+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata 
with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.173+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.271+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.281+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-07T17:02:08.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-07T17:02:08.388+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] (Re-)joining group policy-pap | [2025-06-07T17:02:08.399+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-07T17:02:08.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-07T17:02:08.440+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Request joining group due to: need to re-join with the given member-id: consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7 policy-pap | [2025-06-07T17:02:08.441+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9 policy-pap | [2025-06-07T17:02:08.442+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2025-06-07T17:02:08.442+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-07T17:02:08.444+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException)
policy-pap | [2025-06-07T17:02:08.444+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] (Re-)joining group
policy-pap | [2025-06-07T17:02:11.474+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7', protocol='range'}
policy-pap | [2025-06-07T17:02:11.475+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9', protocol='range'}
policy-pap | [2025-06-07T17:02:11.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Finished assignment for group at generation 1: {consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-07T17:02:11.483+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-07T17:02:11.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-62b9d5cb-2977-46f3-a7ac-613cbbdac6f9', protocol='range'}
policy-pap | [2025-06-07T17:02:11.503+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-07T17:02:11.504+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3-6aa623c0-a0e8-43bc-9e6c-52ec2ec88ba7', protocol='range'}
policy-pap | [2025-06-07T17:02:11.505+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-07T17:02:11.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-07T17:02:11.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-07T17:02:11.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-07T17:02:11.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-07T17:02:11.543+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-07T17:02:11.544+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5547e13a-a9ea-4e08-84c4-a39fc30f8f6d-3, groupId=5547e13a-a9ea-4e08-84c4-a39fc30f8f6d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
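
[Editor's note] The join/assignment/offset-reset sequence above is stock Kafka consumer-group behaviour. For orientation only (this is not code from this build), a minimal kafka-clients consumer subscribed to policy-pdp-pap would emit the same ConsumerCoordinator and SubscriptionState lines; the broker address, topic, and group id below are taken from the log, everything else is an assumption.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // broker seen in the FetchPosition log line
        props.put("group.id", "policy-pap");            // group id of the heartbeat consumer above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "latest");       // "no committed offset" -> reset, as logged

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));  // triggers (Re-)joining group / assignment
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}
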
policy-pap | [2025-06-07T17:02:28.738+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-07T17:02:28.739+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8a45345a-4009-49e5-8e20-99a19a977544","timestampMs":1749315748683,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-07T17:02:28.739+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8a45345a-4009-49e5-8e20-99a19a977544","timestampMs":1749315748683,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-07T17:02:28.748+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-07T17:02:28.833+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting
policy-pap | [2025-06-07T17:02:28.833+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting listener
policy-pap | [2025-06-07T17:02:28.833+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting timer
policy-pap | [2025-06-07T17:02:28.834+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7, expireMs=1749315778834]
policy-pap | [2025-06-07T17:02:28.836+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting enqueue
policy-pap | [2025-06-07T17:02:28.836+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7, expireMs=1749315778834]
policy-pap | [2025-06-07T17:02:28.836+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate started
policy-pap | [2025-06-07T17:02:28.840+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","timestampMs":1749315748814,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.888+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","timestampMs":1749315748814,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.888+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","timestampMs":1749315748814,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.889+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-07T17:02:28.889+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-07T17:02:28.905+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"822ad1cb-eeff-41d7-8658-6f9641c560de","timestampMs":1749315748894,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-07T17:02:28.906+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-07T17:02:28.907+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"822ad1cb-eeff-41d7-8658-6f9641c560de","timestampMs":1749315748894,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-07T17:02:28.912+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f8c887a1-6ee1-4ca9-b0cb-bb1efefa475e","timestampMs":1749315748898,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping
policy-pap | [2025-06-07T17:02:28.926+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f8c887a1-6ee1-4ca9-b0cb-bb1efefa475e","timestampMs":1749315748898,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping enqueue
policy-pap | [2025-06-07T17:02:28.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping timer
policy-pap | [2025-06-07T17:02:28.927+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7, expireMs=1749315778834]
policy-pap | [2025-06-07T17:02:28.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping listener
policy-pap | [2025-06-07T17:02:28.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopped
policy-pap | [2025-06-07T17:02:28.927+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7
policy-pap | [2025-06-07T17:02:28.933+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate successful
policy-pap | [2025-06-07T17:02:28.933+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 start publishing next request
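
[Editor's note] The PDP_STATUS payloads above are plain JSON, and the log later shows PAP selecting GSON for REST serialization. A minimal, illustrative parse of one heartbeat with Gson (not taken from the PAP source, which binds these to typed message classes):

import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class PdpStatusPeek {
    public static void main(String[] args) {
        // Trimmed copy of a PDP_STATUS heartbeat from the log above.
        String payload = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\"}";
        JsonObject status = new Gson().fromJson(payload, JsonObject.class);
        System.out.println(status.get("messageName").getAsString() + " from group "
                + status.get("pdpGroup").getAsString() + ", state=" + status.get("state").getAsString());
    }
}
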
policy-pap | [2025-06-07T17:02:28.933+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange starting
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange starting listener
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange starting timer
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=6e43d7c4-2414-4c17-8d51-ef6c32a29689, expireMs=1749315778934]
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange starting enqueue
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange started
policy-pap | [2025-06-07T17:02:28.934+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=6e43d7c4-2414-4c17-8d51-ef6c32a29689, expireMs=1749315778934]
policy-pap | [2025-06-07T17:02:28.935+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","timestampMs":1749315748815,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.951+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","timestampMs":1749315748815,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.952+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-07T17:02:28.963+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfe2a329-416f-4fb5-975f-610900939321","timestampMs":1749315748952,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.964+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6e43d7c4-2414-4c17-8d51-ef6c32a29689
policy-pap | [2025-06-07T17:02:28.981+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","timestampMs":1749315748815,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.981+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e43d7c4-2414-4c17-8d51-ef6c32a29689","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bfe2a329-416f-4fb5-975f-610900939321","timestampMs":1749315748952,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange stopping
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange stopping enqueue
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange stopping timer
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=6e43d7c4-2414-4c17-8d51-ef6c32a29689, expireMs=1749315778934]
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange stopping listener
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange stopped
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpStateChange successful
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 start publishing next request
policy-pap | [2025-06-07T17:02:28.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting
policy-pap | [2025-06-07T17:02:28.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting listener
policy-pap | [2025-06-07T17:02:28.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting timer
policy-pap | [2025-06-07T17:02:28.986+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=deda2dff-5cbc-410d-993d-6bf01b477fab, expireMs=1749315778986]
policy-pap | [2025-06-07T17:02:28.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate starting enqueue
policy-pap | [2025-06-07T17:02:28.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate started
policy-pap | [2025-06-07T17:02:28.990+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"deda2dff-5cbc-410d-993d-6bf01b477fab","timestampMs":1749315748972,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:29.000+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"deda2dff-5cbc-410d-993d-6bf01b477fab","timestampMs":1749315748972,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:29.001+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-07T17:02:29.004+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-a55f7442-b23a-4c09-8658-41fb5e0face3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"deda2dff-5cbc-410d-993d-6bf01b477fab","timestampMs":1749315748972,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:29.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-07T17:02:29.018+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"deda2dff-5cbc-410d-993d-6bf01b477fab","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a1b5d975-01f1-4040-9c6b-6b351c23a66e","timestampMs":1749315749006,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:29.018+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"deda2dff-5cbc-410d-993d-6bf01b477fab","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"a1b5d975-01f1-4040-9c6b-6b351c23a66e","timestampMs":1749315749006,"name":"apex-b5108a3c-a07a-4bba-970f-001994481908","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id deda2dff-5cbc-410d-993d-6bf01b477fab
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping enqueue
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping timer
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=deda2dff-5cbc-410d-993d-6bf01b477fab, expireMs=1749315778986]
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopping listener
policy-pap | [2025-06-07T17:02:29.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate stopped
policy-pap | [2025-06-07T17:02:29.023+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 PdpUpdate successful
policy-pap | [2025-06-07T17:02:29.023+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5108a3c-a07a-4bba-970f-001994481908 has no more requests
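
[Editor's note] Each outbound PDP request above is guarded by a ~30 s timer: registered on send, cancelled when the matching PDP_STATUS response arrives, and discarded if it expires first (the 17:02:58 "timer discarded (expired)" lines just below). A rough, hypothetical sketch of that register/cancel pattern using a ScheduledExecutorService (not PAP's actual TimerManager):

import java.util.Map;
import java.util.concurrent.*;

// Hypothetical request-timeout registry mirroring the TimerManager log lines.
public class RequestTimers {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    public void register(String requestId, long timeoutMs) {
        // "update timer registered" on send; fires only if no response cancels it first.
        pending.put(requestId, scheduler.schedule(() -> {
            pending.remove(requestId);
            System.out.println("update timer discarded (expired) Timer [name=" + requestId + "]");
        }, timeoutMs, TimeUnit.MILLISECONDS));
    }

    public void cancel(String requestId) {
        // "update timer cancelled" when the matching PDP_STATUS response arrives.
        ScheduledFuture<?> timer = pending.remove(requestId);
        if (timer != null && timer.cancel(false)) {
            System.out.println("update timer cancelled Timer [name=" + requestId + "]");
        }
    }
}
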
policy-pap | [2025-06-07T17:02:39.623+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | [2025-06-07T17:02:39.623+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
policy-pap | [2025-06-07T17:02:39.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
policy-pap | [2025-06-07T17:02:58.834+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=9bb67851-8fa3-40dd-a1e7-1eaf77dce2d7, expireMs=1749315778834]
policy-pap | [2025-06-07T17:02:58.934+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=6e43d7c4-2414-4c17-8d51-ef6c32a29689, expireMs=1749315778934]
policy-pap | [2025-06-07T17:03:00.047+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
policy-pap | [2025-06-07T17:03:00.097+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-07T17:03:00.103+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-07T17:03:00.107+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2025-06-07T17:03:00.481+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup
policy-pap | [2025-06-07T17:03:01.016+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup
policy-pap | [2025-06-07T17:03:01.016+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup
policy-pap | [2025-06-07T17:03:01.542+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2025-06-07T17:03:01.769+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-07T17:03:01.887+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-07T17:03:01.887+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
policy-pap | [2025-06-07T17:03:01.888+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
policy-pap | [2025-06-07T17:03:01.910+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-07T17:03:01Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2025-06-07T17:03:01Z, user=policyadmin)]
policy-pap | [2025-06-07T17:03:02.565+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
policy-pap | [2025-06-07T17:03:02.566+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2025-06-07T17:03:02.566+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2025-06-07T17:03:02.566+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-pap | [2025-06-07T17:03:02.567+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-pap | [2025-06-07T17:03:02.579+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-07T17:03:02Z, user=policyadmin)]
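
[Editor's note] The audit lines above print PolicyAudit objects with the fields auditId, pdpGroup, pdpType, policy, action, timestamp, and user. A plain-Java record with those fields (an illustrative stand-in, not the actual PAP model class) reproduces the shape of the logged output:

import java.time.Instant;

// Illustrative stand-in with the fields visible in the audit log lines above.
public record PolicyAudit(Long auditId, String pdpGroup, String pdpType,
                          String policy, String action, Instant timestamp, String user) {
    public static void main(String[] args) {
        System.out.println(new PolicyAudit(null, "testGroup", "pdpTypeA",
                "onap.restart.tca 1.0.0", "DEPLOYMENT",
                Instant.parse("2025-06-07T17:03:01Z"), "policyadmin"));
    }
}
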
policy-pap | [2025-06-07T17:03:02.911+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
policy-pap | [2025-06-07T17:03:02.911+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2025-06-07T17:03:02.912+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2025-06-07T17:03:02.912+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2025-06-07T17:03:02.912+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2025-06-07T17:03:02.912+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2025-06-07T17:03:02.924+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-07T17:03:02Z, user=policyadmin)]
policy-pap | [2025-06-07T17:03:03.475+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2025-06-07T17:03:03.476+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2025-06-07T17:04:06.959+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
===================================
======== Logs from prometheus ========
prometheus | time=2025-06-07T17:01:32.966Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | time=2025-06-07T17:01:32.967Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)"
prometheus | time=2025-06-07T17:01:32.967Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | time=2025-06-07T17:01:32.970Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs
prometheus | time=2025-06-07T17:01:32.972Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090
prometheus | time=2025-06-07T17:01:32.973Z level=INFO source=main.go:1266 msg="Starting TSDB ..."
prometheus | time=2025-06-07T17:01:32.975Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090
prometheus | time=2025-06-07T17:01:32.975Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090
prometheus | time=2025-06-07T17:01:32.977Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb
prometheus | time=2025-06-07T17:01:32.978Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.32µs
prometheus | time=2025-06-07T17:01:32.978Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb
prometheus | time=2025-06-07T17:01:32.978Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=395.295µs
prometheus | time=2025-06-07T17:01:32.978Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=68.974µs wal_replay_duration=426.897µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.32µs total_replay_duration=582.446µs
prometheus | time=2025-06-07T17:01:32.981Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC
prometheus | time=2025-06-07T17:01:32.981Z level=INFO source=main.go:1290 msg="TSDB started"
prometheus | time=2025-06-07T17:01:32.981Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | time=2025-06-07T17:01:32.983Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75
prometheus | time=2025-06-07T17:01:32.983Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.34µs remote_storage=2.97µs web_handler=750ns query_engine=1.09µs scrape=307.929µs scrape_sd=191.182µs notify=126.857µs notify_sd=16.321µs rules=2.16µs tracing=5.34µs filename=/etc/prometheus/prometheus.yml totalDuration=1.328641ms
prometheus | time=2025-06-07T17:01:32.983Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests."
prometheus | time=2025-06-07T17:01:32.983Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager"
===================================
======== Logs from simulator ========
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
simulator | 2025-06-07 17:01:34,433 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | 2025-06-07 17:01:34,496 INFO org.onap.policy.models.simulators starting
simulator | 2025-06-07 17:01:34,496 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
simulator | 2025-06-07 17:01:34,717 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
simulator | 2025-06-07 17:01:34,718 INFO org.onap.policy.models.simulators starting A&AI simulator
simulator | 2025-06-07 17:01:34,834 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2025-06-07 17:01:34,843 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:34,846 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:34,849 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2025-06-07 17:01:34,927 INFO Session workerName=node0
simulator | 2025-06-07 17:01:35,484 INFO Using GSON for REST calls
simulator | 2025-06-07 17:01:35,586 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
simulator | 2025-06-07 17:01:35,594 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
simulator | 2025-06-07 17:01:35,600 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1629ms
simulator | 2025-06-07 17:01:35,600 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4246 ms.
simulator | 2025-06-07 17:01:35,608 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2025-06-07 17:01:35,611 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2025-06-07 17:01:35,612 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,614 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2025-06-07 17:01:35,641 INFO Session workerName=node0
simulator | 2025-06-07 17:01:35,747 INFO Using GSON for REST calls
simulator | 2025-06-07 17:01:35,757 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
simulator | 2025-06-07 17:01:35,759 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2025-06-07 17:01:35,759 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1788ms
simulator | 2025-06-07 17:01:35,759 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4854 ms.
simulator | 2025-06-07 17:01:35,760 INFO org.onap.policy.models.simulators starting SO simulator
simulator | 2025-06-07 17:01:35,762 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2025-06-07 17:01:35,762 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,762 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,763 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2025-06-07 17:01:35,765 INFO Session workerName=node0
simulator | 2025-06-07 17:01:35,838 INFO Using GSON for REST calls
simulator | 2025-06-07 17:01:35,850 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
simulator | 2025-06-07 17:01:35,851 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
simulator | 2025-06-07 17:01:35,851 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1881ms
simulator | 2025-06-07 17:01:35,851 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4911 ms.
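
[Editor's note] Each simulator above is an embedded Jetty server with a Jersey ServletContainer mounted at /*, as the JettyJerseyServer lines show. A minimal standalone equivalent (illustrative only; the resource package and port choice are assumptions, not the simulator's own code), using Jetty 11 and Jersey 3:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.servlet.ServletContainer;

public class MiniSimulator {
    public static void main(String[] args) throws Exception {
        Server server = new Server(6666);                 // port used by the A&AI simulator above
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        // Jersey handles all requests under /*, matching the logged servlet mapping.
        ResourceConfig config = new ResourceConfig().packages("org.example.sim"); // hypothetical package
        context.addServlet(new ServletHolder(new ServletContainer(config)), "/*");
        server.setHandler(context);
        server.start();                                   // produces "Started Server@..." lines like above
        server.join();
    }
}
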
simulator | 2025-06-07 17:01:35,852 INFO org.onap.policy.models.simulators starting VFC simulator
simulator | 2025-06-07 17:01:35,855 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2025-06-07 17:01:35,855 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,857 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2025-06-07 17:01:35,858 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
simulator | 2025-06-07 17:01:35,863 INFO Session workerName=node0
simulator | 2025-06-07 17:01:35,917 INFO Using GSON for REST calls
simulator | 2025-06-07 17:01:35,924 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
simulator | 2025-06-07 17:01:35,930 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
simulator | 2025-06-07 17:01:35,931 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1960ms
simulator | 2025-06-07 17:01:35,931 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4926 ms.
simulator | 2025-06-07 17:01:35,932 INFO org.onap.policy.models.simulators started
===================================
======== Logs from zookeeper ========
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
zookeeper | [2025-06-07 17:01:32,856] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,858] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,858] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,858] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,858] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,860] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-07 17:01:32,860] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-07 17:01:32,860] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2025-06-07 17:01:32,860] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2025-06-07 17:01:32,861] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2025-06-07 17:01:32,862] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,862] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,862] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,862] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,862] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2025-06-07 17:01:32,862] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2025-06-07 17:01:32,872] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2025-06-07 17:01:32,874] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-07 17:01:32,874] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper | [2025-06-07 17:01:32,876] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-07 17:01:32,883] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,883] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,883] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,883] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,883] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,884] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,884] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,884] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,884] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,884] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,885] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,885] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,885] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,885] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,885] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
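
[Editor's note] ZooKeeper comes up standalone on clientPort 2181 with a 6000 ms minimum session timeout (see the config lines below). For reference only, a bare client connection against that endpoint with the official ZooKeeper API; host and timeout are taken from the log, the rest is illustrative:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkPing {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // 6000 ms matches the server's minSessionTimeout logged below.
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 6000, (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        System.out.println("session id: 0x" + Long.toHexString(zk.getSessionId()));
        zk.close();
    }
}
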
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,886] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,887] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,888] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,888] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,888] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,888] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,888] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,889] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-07 17:01:32,890] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,890] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,895] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-07 17:01:32,896] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-07 17:01:32,896] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,896] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,897] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,897] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,897] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,897] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-07 17:01:32,899] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,899] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,900] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-07 17:01:32,900] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-07 17:01:32,900] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:32,929] INFO Logging initialized @373ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-07 17:01:32,984] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-07 17:01:32,984] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-07 17:01:32,998] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-07 17:01:33,028] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-07 17:01:33,028] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-07 17:01:33,029] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-07 17:01:33,032] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-07 17:01:33,040] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-07 17:01:33,050] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-07 17:01:33,050] INFO Started @499ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-07 17:01:33,050] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
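The AdminServer that just started on port 8080 exposes ZooKeeper's diagnostic commands over HTTP at the /commands URL logged above, which is handy for a quick liveness probe during a run. A minimal sketch, assuming curl is available inside the zookeeper container (the 8080 port is not necessarily published to the host):

# Query the AdminServer's stat command from inside the container.
# Assumes the container is named "zookeeper" and the image ships curl.
docker exec zookeeper curl -s http://localhost:8080/commands/stat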
zookeeper | [2025-06-07 17:01:33,053] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-07 17:01:33,054] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-07 17:01:33,055] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-07 17:01:33,056] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-07 17:01:33,068] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-07 17:01:33,068] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-07 17:01:33,068] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-07 17:01:33,068] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-07 17:01:33,072] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-07 17:01:33,072] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-07 17:01:33,075] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-07 17:01:33,075] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-07 17:01:33,076] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-07 17:01:33,082] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-07 17:01:33,083] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-07 17:01:33,096] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-07 17:01:33,096] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-07 17:01:34,070] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
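Startup ends with the first transaction log segment (log.1) being created; the CSIT suites then run, and the job tears the stack down below. If a failed run needs post-mortem data, per-container logs can be snapshotted before the teardown removes the containers; a sketch (service names taken from the teardown output that follows, archive directory is an assumption):

# Capture the tail of each service's log before "docker compose down" wipes the containers.
for c in zookeeper kafka policy-api policy-pap policy-apex-pdp; do
  docker logs --tail 200 "$c" > "archives/${c}.log" 2>&1
done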
===================================
Tearing down containers...
Container grafana Stopping
Container policy-apex-pdp Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-apex-pdp Stopped
Container policy-apex-pdp Removing
Container policy-apex-pdp Removed
Container simulator Stopping
Container policy-pap Stopping
Container simulator Stopped
Container simulator Removing
Container simulator Removed
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container mariadb Stopping
Container mariadb Stopped
Container mariadb Removing
Container mariadb Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2057 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14238613714768733794.sh
---> sysstat.sh
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins17303314085956980682.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-newdelhi-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-newdelhi-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-newdelhi-project-csit-pap/archives/
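The trace above is package-listing.sh diffing the dpkg state captured at job start against the state at job end, then archiving all three lists. Condensed into a standalone sketch (paths as in the trace; WORKSPACE is the standard Jenkins environment variable; note that diff exits non-zero when the lists differ, hence the guard):

#!/bin/bash
# Compare installed packages at the start vs. end of the job and archive the results.
dpkg -l | grep '^ii' > /tmp/packages_end.txt
diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
mkdir -p "${WORKSPACE}/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "${WORKSPACE}/archives/"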
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16495188853839776140.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fzrt from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-fzrt/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16193555255074928267.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-newdelhi-project-csit-pap@tmp/config12384700142631909824tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins17201016174666404386.sh
---> create-netrc.sh
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins16720517090979643747.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fzrt from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-fzrt/bin to PATH
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins14965161907350313071.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash /tmp/jenkins12861477033145407001.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fzrt from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-fzrt/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-newdelhi-project-csit-pap] $ /bin/bash -l /tmp/jenkins17553840647954383234.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-newdelhi-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fzrt from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-fzrt/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/385
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
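logs-deploy.sh pushes the workspace archives and console log to the Nexus path printed above via lftools. A hedged sketch of the underlying call (argument order as in current lftools releases; verify against the lf-infra version pinned by the job):

# Deploy build logs to Nexus; the URL and path are taken from the INFO lines above,
# BUILD_URL is the standard Jenkins environment variable.
lftools deploy logs "https://nexus.onap.org" \
  "production/vex-yul-ecomp-jenkins-1/policy-pap-newdelhi-project-csit-pap/385" \
  "${BUILD_URL}"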
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-19311 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem   Size  Used  Avail  Use%  Mounted on
udev          16G     0    16G    0%  /dev
tmpfs        3.2G  708K   3.2G    1%  /run
/dev/vda1    155G   15G   141G   10%  /
tmpfs         16G     0    16G    0%  /dev/shm
tmpfs        5.0M     0   5.0M    0%  /run/lock
tmpfs         16G     0    16G    0%  /sys/fs/cgroup
/dev/vda15   105M  4.4M   100M    5%  /boot/efi
tmpfs        3.2G     0   3.2G    0%  /run/user/1001

---> free -m:
        total   used   free   shared  buff/cache  available
Mem:    32167    869  24559        0        6738      30842
Swap:    1023      0   1023

---> ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:12:b5:32 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.5/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 86033sec preferred_lft 86033sec
    inet6 fe80::f816:3eff:fe12:b532/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8d:29:bb:53 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8dff:fe29:bb53/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-19311)  06/07/25  _x86_64_  (8 CPU)

16:59:20  LINUX RESTART  (8 CPU)

17:00:01   tps     rtps   wtps    bread/s  bwrtn/s
17:01:01   212.84  23.52  189.32  2372.28  54546.86
17:02:01   550.82  11.85  538.98  780.47   192174.57
17:03:01   151.69  0.37   151.32  30.53    51127.25
17:04:01   27.35   0.00   27.35   0.00     33300.10
17:05:01   63.82   1.25   62.57   93.45    1742.53
Average:   201.31  7.40   193.91  655.46   66577.46

17:00:01  kbmemfree  kbavail   kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
17:01:01  26360224   31577288  6578996    19.97     99368      5329868   2312224   6.80     1027244   5104952  3287684
17:02:01  23512764   29784488  9426456    28.62     146740     6233892   8802640   25.90    3024812   5800260  392
17:03:01  22634248   29243664  10304972   31.28     176580     6500360   9492028   27.93    3654528   5996752  1392
17:04:01  22681448   29291876  10257772   31.14     176768     6501004   9458964   27.83    3605532   5997372  220
17:05:01  25119648   31548904  7819572    23.74     178668     6335616   1704464   5.01     1407764   5827216  27828
Average:  24061666   30289244  8877554    26.95     155625     6180148   6354064   18.70    2543976   5745310  663503

17:00:01  IFACE        rxpck/s  txpck/s  rxkB/s    txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
17:01:01  ens3         1982.73  958.84   35189.85  80.80   0.00     0.00     0.00      0.00
17:01:01  docker0      0.00     0.00     0.00      0.00    0.00     0.00     0.00      0.00
17:01:01  lo           15.79    15.79    1.57      1.57    0.00     0.00     0.00      0.00
17:02:01  vethc4a40ad  1.35     2.10     0.15      0.20    0.00     0.00     0.00      0.00
17:02:01  vethbb711b2  50.99    62.24    18.96     15.02   0.00     0.00     0.00      0.00
17:02:01  vethabdd35e  24.16    22.38    10.53     16.08   0.00     0.00     0.00      0.00
17:02:01  veth8e66043  1.77     1.93     0.17      0.19    0.00     0.00     0.00      0.00
17:03:01  vethc4a40ad  3.65     4.70     0.67      0.75    0.00     0.00     0.00      0.00
17:03:01  veth8acef70  1.13     1.12     1.58      1.57    0.00     0.00     0.00      0.00
17:03:01  vethbb711b2  26.96    33.44    32.42     9.09    0.00     0.00     0.00      0.00
17:03:01  vethabdd35e  21.93    17.71    6.80      23.82   0.00     0.00     0.00      0.00
17:04:01  vethc4a40ad  0.17     0.35     0.01      0.02    0.00     0.00     0.00      0.00
17:04:01  veth8acef70  1.25     1.07     0.15      0.30    0.00     0.00     0.00      0.00
17:04:01  vethbb711b2  21.96    27.01    27.87     8.75    0.00     0.00     0.00      0.00
17:04:01  vethabdd35e  0.32     0.35     0.58      0.03    0.00     0.00     0.00      0.00
17:05:01  ens3         2621.38  1375.69  37901.49  197.67  0.00     0.00     0.00      0.00
17:05:01  docker0      26.30    36.56    2.78      299.35  0.00     0.00     0.00      0.00
17:05:01  lo           27.43    27.43    2.57      2.57    0.00     0.00     0.00      0.00
Average:  ens3         443.60   226.33   7413.08   25.21   0.00     0.00     0.00      0.00
Average:  docker0      5.26     7.31     0.56      59.87   0.00     0.00     0.00      0.00
Average:  lo           4.70     4.70     0.45      0.45    0.00     0.00     0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-19311)  06/07/25  _x86_64_  (8 CPU)

16:59:20  LINUX RESTART  (8 CPU)

17:00:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
17:01:01  all  15.32  0.00   4.30     1.98     0.05    78.36
17:01:01  0    16.31  0.00   4.38     0.25     0.03    79.03
17:01:01  1    35.25  0.00   5.02     1.79     0.07    57.87
17:01:01  2    16.29  0.00   4.35     0.69     0.03    78.63
17:01:01  3    7.74   0.00   4.43     5.02     0.07    82.74
17:01:01  4    24.36  0.00   4.40     1.05     0.07    70.12
17:01:01  5    7.24   0.00   3.99     0.34     0.03    88.40
17:01:01  6    8.30   0.00   4.06     1.29     0.03    86.32
17:01:01  7    7.06   0.00   3.72     5.40     0.03    83.78
17:02:01  all  23.35  0.00   5.52     8.59     0.10    62.44
17:02:01  0    18.52  0.00   4.74     4.10     0.08    72.56
17:02:01  1    20.75  0.00   5.27     15.93    0.08    57.97
17:02:01  2    25.28  0.00   5.40     3.98     0.10    65.24
17:02:01  3    22.36  0.00   6.00     27.32    0.12    44.19
17:02:01  4    32.00  0.00   6.04     7.11     0.08    54.77
17:02:01  5    25.25  0.00   5.57     3.21     0.10    65.87
17:02:01  6    24.74  0.00   5.27     5.24     0.10    64.64
17:02:01  7    17.92  0.00   5.87     1.91     0.10    74.20
17:03:01  all  15.72  0.00   2.44     1.66     0.07    80.10
17:03:01  0    13.54  0.00   1.55     2.68     0.08    82.15
17:03:01  1    14.78  0.00   2.55     0.60     0.07    82.00
17:03:01  2    13.93  0.00   2.16     1.34     0.07    82.51
17:03:01  3    16.13  0.00   2.19     1.01     0.08    80.59
17:03:01  4    18.26  0.00   2.70     0.42     0.07    78.56
17:03:01  5    20.17  0.00   2.91     0.64     0.07    76.21
17:03:01  6    16.32  0.00   2.51     0.20     0.07    80.90
17:03:01  7    12.65  0.00   3.01     6.40     0.07    77.88
17:04:01  all  2.23   0.00   0.26     1.00     0.06    96.45
17:04:01  0    2.69   0.00   0.17     7.86     0.07    89.22
17:04:01  1    2.59   0.00   0.30     0.15     0.07    96.89
17:04:01  2    2.44   0.00   0.25     0.00     0.05    97.26
17:04:01  3    2.26   0.00   0.42     0.00     0.10    97.23
17:04:01  4    2.29   0.00   0.27     0.00     0.05    97.40
17:04:01  5    2.95   0.00   0.25     0.02     0.05    96.73
17:04:01  6    0.95   0.00   0.17     0.00     0.07    98.81
17:04:01  7    1.67   0.00   0.20     0.02     0.05    98.07
17:05:01  all  5.47   0.00   0.82     0.23     0.04    93.44
17:05:01  0    4.45   0.00   0.77     0.10     0.03    94.64
17:05:01  1    4.57   0.00   0.93     0.47     0.03    93.99
17:05:01  2    9.59   0.00   0.87     0.12     0.05    89.38
17:05:01  3    5.20   0.00   0.77     0.13     0.05    93.85
17:05:01  4    2.05   0.00   0.77     0.13     0.03    97.01
17:05:01  5    2.45   0.00   0.68     0.02     0.03    96.81
17:05:01  6    2.61   0.00   0.75     0.05     0.05    96.54
17:05:01  7    12.85  0.00   1.03     0.77     0.05    85.29
Average:  all  12.39  0.00   2.66     2.68     0.06    82.20
Average:  0    11.07  0.00   2.31     2.99     0.06    83.58
Average:  1    15.57  0.00   2.81     3.77     0.06    77.78
Average:  2    13.48  0.00   2.60     1.22     0.06    82.65
Average:  3    10.72  0.00   2.76     6.66     0.08    79.78
Average:  4    15.76  0.00   2.83     1.74     0.06    79.62
Average:  5    11.57  0.00   2.67     0.84     0.06    84.86
Average:  6    10.57  0.00   2.55     1.35     0.06    85.47
Average:  7    10.42  0.00   2.76     2.89     0.06    83.86
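Reading the sar tables: the 17:02:01 interval is the heaviest of the run, with the peak write rate (bwrtn/s 192174.57) and the peak I/O wait (27.32% on CPU 3), which coincides with the container stack coming up. Such intervals can be flagged automatically from live sar output; a small sketch (column positions as in the %iowait table above; a 24-hour locale is assumed so the timestamp occupies a single field):

# Sample all CPUs five times at 60 s intervals and flag any line
# whose %iowait (field 6) exceeds 10%.
sar -P ALL 60 5 | awk '$2 ~ /^([0-9]+|all)$/ && $6+0 > 10 {print $1, "CPU", $2, "iowait", $6 "%"}'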